
Make AI safe again

It is a fallacy to believe there is a trade-off between regulation and innovation

When the Chernobyl nuclear power plant exploded in 1986, it was a catastrophe for those who lived nearby in northern Ukraine. But the accident was also a disaster for a global industry pushing nuclear energy as the technology of the future. The net number of nuclear reactors has pretty much flatlined since, as the technology came to be seen as unsafe. What would happen today if the AI industry suffered an equivalent accident?

That question was posed on the sidelines of this week’s AI Action Summit in Paris by Stuart Russell, a professor of computer science at the University of California, Berkeley. His answer was that it is a fallacy to believe there has to be a trade-off between safety and innovation, so even those most excited by the promise of AI technology should proceed carefully. “You cannot have innovation without safety,” he said.

Russell’s warning was echoed by some other AI experts in Paris. “We have to have minimum safety standards agreed globally. We need to have these in place before we have a major disaster,” Wendy Hall, director of the Web Science Institute at the University of Southampton, told me. 

Copyright notice: The copyright of this article belongs to FT Chinese (FT中文網). Without permission, no organisation or individual may reproduce, copy, or otherwise use all or part of this article; infringement will be pursued.