
It can be hard to train a chatbot. Last month, OpenAI rolled back an update to ChatGPT because its “default personality” was too sycophantic. (Maybe the company’s training data was taken from transcripts of US President Donald Trump’s cabinet meetings . . .)
The artificial intelligence company had wanted to make its chatbot more intuitive, but its responses to users’ enquiries skewed towards being overly supportive and disingenuous. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right,” the company said in a blog post.