There can be no AI regulation without corporate transparency

AI companies are growing ever more secretive as their power and profiles blossom
Schaake: AI companies are becoming ever more powerful, but they are also growing more secretive about their data and algorithms, leaving researchers unable to assess the safety of their models.

The writer is international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission

Hardly a day goes by without a new proposal on how to regulate AI: research bodies, safety agencies, an idea modelled on the International Atomic Energy Agency and branded ‘IAEA for AI’ . . . the list keeps growing. All these suggestions reflect an urgent desire to do something, even if there is no consensus on what that “something” should be. There is certainly a lot at stake, from employment and discrimination to national security and democracy. But can political leaders actually develop the necessary policies when they know so little about AI?

