
The ‘AI doomers’ have lost this battle

Failed coups, as seen at OpenAI, often accelerate the thing that they were trying to prevent
The writer is a technology analyst

Over the past week, OpenAI’s board went through four CEOs in five days. It accused the original chief executive, Sam Altman, of lying, but later backed away from that claim and refused to say what it had meant. Ninety per cent of the organisation’s staff signed an open letter saying they’d quit if the board didn’t. Silicon Valley was both riveted and horrified. By Wednesday, Altman was back, two of the three external board members had been replaced, and everyone could get some sleep.

It would be easy to say that this chaos showed that both OpenAI’s board and its curious subdivided non-profit and for-profit structure were not fit for purpose. One could also suggest that the external board members did not have the appropriate background or experience to oversee a $90bn company that has been setting the agenda for a hugely important technology breakthrough. One could probably say less polite things too, and all of that might be true, but it would also be incomplete.  

As far as we know (and the very fact that I have to say that is also a problem), the underlying conflict inside OpenAI was one that a lot of people have pointed to and indeed made fun of over the past year. OpenAI was created to try to build a machine version of something approximating to human intelligence (so-called “AGI”, or artificial general intelligence). The premise was that this was possible within years rather than decades, and potentially very good but also potentially very dangerous, not just for pedestrian things such as democracy or society but for humanity itself.
