A 'rogue employee' was behind Grok's unprompted 'white genocide' mentions

ChatGPT wowed the world in 2023 with its text-predicting ability, which many found uncanny to the point of being a little scary. Now OpenAI, the company behind the chatbot, has announced a new feature that might make it even more so. ChatGPT is now testing a memory feature, meaning it will soon be able to recall things you have discussed with it and use those memories in future conversations. The test will include a limited rollout with select free and Plus paid users soon. OpenAI says the feature should improve not only the bot's utility but also make it seem more conversational, and depending on the user, that could mean different things. In its statement, the company outlined a few examples: the memory feature could allow ChatGPT to remember how a user prefers their responses formatted, and it may provide more specific context in responses for a particular user going forward, like your travel preferences or even family members. However, if things are getting a little too creepy for you, users will also have the option to delete individual memories or turn the function off completely. The company also said it is taking steps to avoid automatically logging sensitive information, though it has not outlined exactly how it will do so.

Elon Musk's artificial intelligence company on Friday said a "rogue employee" was behind its chatbot's unsolicited rants about "white genocide" in South Africa earlier this week.

The clarification comes less than 48 hours after Grok, the chatbot from Musk's xAI that is available through his social media platform, X, began bombarding users with unfounded genocidal theories in response to queries about completely off-topic subjects.


In an X post, the company said the "unauthorized modification" in the extremely early morning hours Pacific time pushed the AI-imbued chatbot to "provide a specific response on a political topic" that violates xAI's policies. The company did not identify the employee.

"We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability," the company said in the post.

To do so, xAI says it will openly publish Grok's system prompts on GitHub to ensure more transparency. Additionally, the company says it will install "checks and measures" to make sure xAI employees can't alter prompts without preliminary review. And the AI company will also have a monitoring team in place 24/7 to address issues that aren't tackled by the automated systems.

Nicolas Miailhe, co-founder and chief executive of PRISM Eval, an AI testing and evaluation start-up, told CNN that X's proposed remedy is a mixed bag. "More transparency is generally better on this given the nature of the bot and platform (media)," Miailhe said. "Though detailed info about the system prompting can also be used by malicious actors to craft prompt injection attacks."

Musk, who owns xAI and currently serves as a top White House adviser, was born and raised in South Africa and has a history of arguing that a "white genocide" was committed in the nation. The billionaire media mogul has also claimed that white farmers in the country are being discriminated against under land reform policies that the South African government says are aimed at combating apartheid fallout.

Less than a week ago, the Trump administration allowed 59 white South Africans to enter the U.S. as refugees, claiming they'd been discriminated against, while simultaneously suspending all other refugee resettlement.

Per a Grok response to xAI's own post, the "white genocide" responses occurred after a "rogue employee at xAI tweaked my prompts without permission on May 14," allowing the AI chatbot to "spit out a canned political response that went against xAI's values."

Notably, the chatbot declined to take ownership of its actions, saying, "I didn't do anything - I was just following the script I was given, like a good AI!" While it's true that chatbots' responses are predicated on approved text anchored to their code, the dismissive admission underscores the danger of AI, both in disseminating harmful information and in playing down its part in such incidents.

When CNN asked Grok why it had shared answers about "white genocide," the AI chatbot again pointed to the rogue employee, adding that "my responses may have been influenced by recent discussions on X or data I was trained on, but I should have stayed on topic."

Over two years have passed since OpenAI's ChatGPT made its splashy debut, opening the floodgates on commercially available AI chatbots. Since then, a litany of other AI chatbots, including Google's Gemini, Anthropic's Claude, Perplexity, Mistral's Le Chat, and DeepSeek, have become available to U.S. adults.

A recent Gallup poll shows that most Americans are using multiple AI-enabled products weekly, regardless of whether they're aware of the fact. But another recent study, this one from the Pew Research Center, shows that only "one-third of U.S. adults say they have ever used an AI chatbot," while 59% of U.S. adults don't think they have much control over AI in their lives.

CNN asked xAI whether the "rogue employee" has been suspended or terminated, as well as whether the company plans to reveal the employee's identity. The company did not respond at the time of publication.