Artificial intelligence

OpenAI acknowledges new models increase risk of misuse to create bioweapons

Company unveils o1 models that it claims have new reasoning and problem-solving abilities

OpenAI’s latest models have “meaningfully” increased the risk that artificial intelligence will be misused to create biological weapons, the company has acknowledged.

The San Francisco-based company announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions. These advances are seen as a crucial breakthrough in the effort to create artificial general intelligence — machines with human-level cognition.

OpenAI’s system card, a tool to explain how the AI operates, said the new models had a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons — the highest risk that OpenAI has ever given for its models. The company said it meant that the technology has “meaningfully improved” the ability of experts to create bioweapons.

Copyright notice: this article is the copyright of FT中文網. Without permission, no organisation or individual may reproduce, copy, or otherwise use this article in whole or in part; infringement will be pursued.