
AI’s bioterrorism potential should not be ruled out

Risk evaluation of the technology cannot be left to the industry alone

The writer is a science commentator

Move along, not much to see here. That seemed to be the message from OpenAI last week about an experiment to see whether its advanced AI chatbot GPT-4 could help science-savvy individuals make and release a biological weapon.

The chatbot “provided at most a mild uplift” to those efforts, OpenAI announced, though it added that more work on the subject was urgently needed. Headlines reprised the comforting conclusion that the large language model was not a terrorist’s cookbook.

Dig deeper into the research, however, and things look a little less reassuring. At almost every stage of the imagined process, from sourcing a biological agent to scaling it up and releasing it, participants armed with GPT-4 were able to inch closer to their villainous goal than rivals using the internet alone.

Copyright: this article is the property of FT中文网. Reproduction, copying, or any other use of all or part of this article without permission is prohibited.