A 'rogue employee' was behind Grok's unprompted 'white genocide' mentions
Elon Musk's artificial intelligence company on Friday said a "rogue employee" was behind its chatbot's unsolicited rants about "white genocide" in South Africa earlier this week.
The clarification comes after Grok, the chatbot from Musk's xAI that is available through his social media platform X, began bombarding users with unfounded claims of genocide in response to queries about completely unrelated subjects.
In a post on X, the company said the "unauthorized modification" in the early morning hours Pacific time pushed the chatbot to "provide a specific response on a political topic" that violates xAI's policies. The company did not identify the employee.
"We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability," the company said in the post.
To do so, xAI says it will openly publish Grok's system prompts to ensure more transparency. Additionally, the company says it will install "checks and measures" to make sure xAI employees can't alter prompts without preliminary review. The AI company will also have a monitoring team in place 24/7 to address issues that aren't tackled by its automated systems.
Nicolas Miailhe, co-founder and chief executive of PRISM Eval, an AI testing and evaluation start-up, told CNN that xAI's proposed remedy is a mixed bag. "More transparency is generally better on this given the nature of the bot and platform (media)," Miailhe said. "Though detailed info about the system prompting can also be used by malicious actors to craft prompt injection attacks."
Musk, who owns xAI and currently serves as a top White House adviser, was born and raised in South Africa and has a history of arguing that a "white genocide" was committed in the nation. The billionaire media mogul has also claimed that white farmers in the country are being discriminated against under land reform policies that the South African government says are aimed at combating apartheid fallout.
Less than a week ago, the Trump administration allowed 59 white South Africans to enter the U.S. as refugees, claiming they'd been discriminated against, while simultaneously suspending all other refugee resettlement.
Per Grok itself, the "white genocide" responses occurred after a "rogue employee at xAI tweaked my prompts without permission on May 14," allowing the AI chatbot to "spit out a canned political response that went against xAI's values."
Notably, the chatbot declined to take ownership of its actions, saying, "I didn't do anything – I was just following the script I was given, like a good AI!" While it's true that chatbots' responses are shaped by the prompts and instructions built into their code, the dismissive admission underscores the danger of AI, both in disseminating harmful information and in playing down its part in such incidents.
When CNN asked Grok why it had shared answers about "white genocide," the AI chatbot again pointed to the rogue employee, adding that "my responses may have been influenced by recent discussions on X or data I was trained on, but I should have stayed on topic."
Over two years have passed since OpenAI's ChatGPT made its splashy debut, opening the floodgates on commercially available AI chatbots. Since then, a litany of other AI chatbots, including Google's Gemini, Anthropic's Claude, Perplexity, Mistral's Le Chat, and DeepSeek, have become available to U.S. adults.
Recent research shows that most Americans are using multiple AI-enabled products weekly, whether or not they're aware of it. But another recent study shows that only "one-third of U.S. adults say they have ever used an AI chatbot," while 59% of U.S. adults don't think they have much control over AI in their lives.
CNN asked xAI whether the "rogue employee" has been suspended or terminated, as well as whether the company plans to reveal the employee's identity. The company had not responded by the time of publication.