Anthropic/AI: safety-first mandate does not eclipse unusual set-up

Start-up has a ‘benefit trust’ that oversees some seats on its board and does not answer to investors

Silicon Valley doomers worry that generative artificial intelligence will supercharge global risk. OpenAI rival Anthropic positions itself as an ultra-safe, responsible AI start-up. That does not mean it will escape regulatory scrutiny.

Anthropic is run by siblings and former OpenAI employees Dario and Daniela Amodei, chief executive and president respectively. Based in San Francisco, not far from OpenAI’s headquarters, it seeks to create AI models that follow a set of guiding principles. This month, it published research that examined bias in its AI-powered chatbot Claude. Not to be outdone, OpenAI has released its own safety plan assessing potential catastrophes. 

This will please Washington worrywarts. But the US Federal Trade Commission’s Lina Khan has more prosaic concerns. She is interested in the ways in which Big Tech companies are investing in AI start-ups like Anthropic, in effect concentrating power in a new sector.
