The writer is a science commentator

Move along, not much to see here. That seemed to be the message from OpenAI last week, about an experiment to see whether its advanced AI chatbot GPT-4 could help science-savvy individuals make and release a biological weapon.
The chatbot “provided at most a mild uplift” to those efforts, OpenAI announced, though it added that more work on the subject was urgently needed. Headlines reprised the comforting conclusion that the large language model was not a terrorist’s cookbook.
Dig deeper into the research, however, and things look a little less reassuring. At almost every stage of the imagined process, from sourcing a biological agent to scaling it up and releasing it, participants armed with GPT-4 were able to inch closer to their villainous goal than rivals using the internet alone.