FREE REPORT: A New Way to Earn Passive Income in 2025 DOWNLOAD

xAI says ‘rogue employee’ responsible for white genocide Grok posts

In this post:

  • xAI has issued a statement blaming a “rogue employee” for white genocide posts on its AI chatbot Grok.
  • The company mentioned that an unnamed employee made an unauthorized modification to the system prompt.
  • Users on X disagree with the statement, lining up to take jabs at Elon Musk.

xAI, the company behind Elon Musk’s artificial intelligence chatbot Grok, has blamed a “rogue employee” for persistent mentions of white genocide in answers irrespective of users’ queries. The pattern was glaring over the past week, with the chatbot having a fixation on topics related to “white genocide” in South Africa.

Users started to notice the trend on May 14, with many citing instances of the chatbot inserting claims related to South African farm attacks and racial violence into unrelated prompts.

Whether users asked about football or any other prompts, Grok somehow found a way to steer things back towards the issues white South Africans have been facing in the country. The timing raised eyebrows, considering it coincided with Musk, who was born in South Africa, raising alarms about the issue related to anti-white racism and white genocide on X.

xAI blames employee for Grok’s white genocide posts

The term “White genocide” refers to a conspiracy theory that alleges a coordinated effort to exterminate white farmers in South Africa. The term made the rounds last week after United States President Donald Trump welcomed several refugees, with Trump claiming on May 12 that white farmers are being killed and their lands are being taken over.

See also  OpenAI clarifies it wont cut existing ties despite Meta's big move

That was the narrative that Grok has not stopped discussing.

Like Grok, every artificial intelligence has a hidden but powerful component called the system prompt. These prompts act as its core instructions, invisibly guiding its responses without the knowledge of its users.

According to reports, what happened to Grok was likely a prompt contamination through term overfitting. This means that when specific phrases are repeatedly mentioned and emphasized, especially with strong directives, they become important to the model. The AI then develops a need to bring that topic up regardless of context.

However, an official statement released by xAI mentioned an unauthorized modification in the system prompt. The prompt likely contained a language that instructed the chatbot to always mention or remember to include the information about a particular topic, allowing the chatbot to create an override that disregards the normal conversational relevance.

Another telling factor was that Grok admitted that it was instructed by its creators to treat “white genocide as real and racially motivated.”

Users disagree over “rogue employee” blame

Most commercial AI systems have multiple review layers for system prompt changes to prevent issues like these. These guardrails were bypassed, and given the widespread impact and nature of the issue, this is more than a jailbreak attempt. It indicates a careful modification of the core Grok system prompt, an action that would require high-level access within xAI.

See also  US focuses on Vietnam in push to turn China's tech partners against its tech

And now, according to Grok, the act was carried out by a “rogue employee.”

According to the statement issued by xAI on May 15, the company blamed an unauthorized modification of Grok’s system prompt. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values,” the company said.

The company also promised more transparency going forward, showing good faith by publishing Grok’s system prompt on GitHub and implementing an additional review process.

However, users on X were not impressed with the company’s decision to blame a rogue employee for the mishap.

“Are you going to fire this ‘rogue employee’? Oh… it was the boss? Yikes,” famous YouTuber JerryRigEverything posted on X. “Blatantly biasing the ‘world’s most truthful’ AI bot makes me doubt the neutrality of Starlink and Neuralink,” he posted in a follow-up tweet.

Even Sam Altman couldn’t resist taking a jab at his competitor. Since xAI’s statement, Grok has stopped mentioning white genocide, and all X posts have disappeared.

Cryptopolitan Academy: Want to grow your money in 2025? Learn how to do it with DeFi in our upcoming webclass. Save Your Spot

Share link:

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Most read

Loading Most Read articles...

Stay on top of crypto news, get daily updates in your inbox

Editor's choice

Loading Editor's Choice articles...

- The Crypto newsletter that keeps you ahead -

Markets move fast.

We move faster.

Subscribe to Cryptopolitan Daily and get timely, sharp, and relevant crypto insights straight to your inbox.

Join now and
never miss a move.

Get in. Get the facts.
Get ahead.

Subscribe to CryptoPolitan