Microsoft, OpenAI, Google and Anthropic have stepped up a united push towards safety standards for artificial intelligence and appointed a director as their alliance seeks to fill “a gap” in global regulation.
The four tech giants, which this summer banded together to form the Frontier Model Forum, on Wednesday picked Chris Meserole from the Brookings Institution to be executive director of the group. The forum also announced plans to commit $10mn to an AI safety fund.
“We’re probably a little ways away from there actually being regulation,” Meserole, who is stepping down from his role as an AI director at the Washington-based think-tank, told the Financial Times. “In the meantime, we want to make sure that these systems are being built as safely as possible.”