Over the past week, OpenAI’s board went through four CEOs in five days. It accused the original chief executive, Sam Altman, of lying, but later backed down from that accusation and refused to explain what it had meant. Ninety per cent of the organisation’s staff signed an open letter saying they’d quit if the board didn’t resign. Silicon Valley was both riveted and horrified. By Wednesday, Altman was back, two of the three external board members had been replaced, and everyone could get some sleep.
It would be easy to say that this chaos showed that both OpenAI’s board and its curious subdivided non-profit and for-profit structure were not fit for purpose. One could also suggest that the external board members did not have the appropriate background or experience to oversee a $90bn company that has been setting the agenda for a hugely important technology breakthrough. One could probably say less polite things too, and all of that might be true, but it would also be incomplete.
As far as we know (and the very fact that I have to say that is also a problem), the underlying conflict inside OpenAI was one that a lot of people have pointed to, and indeed made fun of, over the past year. OpenAI was created to try to build a machine version of something approximating human intelligence (so-called “AGI”, or artificial general intelligence). The premise was that this was possible within years rather than decades, and that it would be potentially very good but also potentially very dangerous, not just for pedestrian things such as democracy or society but for humanity itself.