Elon Musk’s artificial intelligence chatbot repeatedly referenced race relations in South Africa to users on X in response to unrelated questions, raising concerns about the reliability of a model used by millions.
In answers provided to dozens of users on Wednesday, X’s AI chatbot Grok cited “white genocide” in South Africa, as well as the anti-apartheid chant “Kill the Boer”, even though the original queries had nothing to do with either topic. Grok offers such context when users on X tag the chatbot underneath a post.
The apparent glitch lasted only a brief period and seemed to have been fixed by Wednesday afternoon, but it will raise questions about the accuracy of Musk’s AI model and its potential to spread false or inflammatory theories.