
The threat of catastrophes calls for radical tech regulation

It is tempting to believe that we must have reached peak chaos, given the insanities of Brexit politics and the inanities of President Donald Trump. But we can cheer ourselves up this festive season by imagining the ways in which things could be so much worse.

During the past few years, a sprinkling of institutes has sprung up in UK and US universities with the explicit aim of researching existential risks to our species. Catastrophic climate change, nuclear war, pandemics, a rogue superintelligence and alien invasion are just some of the scary scenarios explored by these academic doomsters. Many of these threats are outlined in a disturbingly eloquent book, On the Future, written by Martin Rees, one of Britain’s most eminent scientists, who helped set up the Centre for the Study of Existential Risk at Cambridge university. The author’s contention is that the stakes have never been higher for humanity: we have reached such a level of technological capability that we now possess the power to destroy our planet by mistake, and we must therefore pursue more responsible innovation.

Yet we exhibit no sense of urgency about many of these potential dangers. If we knew that there was a 10 per cent probability that an asteroid would crash into Earth in 2100, we would mobilise every resource to save our descendants. But we remain alarmingly insouciant about the threats of global warming and genetic engineering because the risks seem more nebulous.
