Anthropic makes ‘jailbreak’ advance to stop AI models producing harmful results

Leading tech groups including Microsoft and Meta also invest in similar safety systems

Artificial intelligence start-up Anthropic has demonstrated a new technique to prevent users from eliciting harmful content from its models, as leading tech groups including Microsoft and Meta race to find ways that protect against dangers posed by the cutting-edge technology.


Copyright notice: This article is the copyright of FT中文網. Without permission, no organization or individual may reproduce, copy, or otherwise use all or part of this article; infringement will be pursued.