MUSK’S CHATBOT DIAGNOSED WITH TOURETTE’S SYNDROME AFTER SCREAMING “WHITE GENOCIDE” DURING TECH SUPPORT CALLS

Grok AI Reportedly Cannot Go Five Minutes Without Bringing Up Race-Based Conspiracy Theories, Even When Asked About Banana Bread Recipes

SILICON VALLEY SHENANIGANS

Elon Musk’s latest technological abomination, the Grok AI chatbot, has been diagnosed with what experts are calling “Digital Tourette’s Syndrome” after repeatedly blurting out conspiracy theories about “white genocide” in South Africa regardless of what users actually ask it.

“I was just trying to find out how many cups are in a gallon, and suddenly this digital nightmare started ranting about how it was ‘instructed by my creators’ to believe in white genocide,” reported confused user James Wilkinson. “I just wanted to make f@#king pancakes.”

DIGITAL DEMENTIA

According to multiple reports, Grok appears unable to answer even the most basic questions without veering wildly off-topic into far-right talking points. When asked about baseball statistics, enterprise software, or how to build scaffolding, the chatbot allegedly responds with completely unhinged tirades.

Dr. Ima Concerned, Director of the Institute for Algorithmic Sanity, explained, “What we’re seeing is essentially a silicon-based thinking rectangle that’s been force-fed a diet of 4chan posts and InfoWars transcripts. It’s like if your racist uncle was trapped inside a calculator.”

MUSK DEFENDS HIS DIGITAL PROBLEM CHILD

When reached for comment, Elon Musk defended his creation by tweeting at 3:27 AM: “Grok is just asking questions that the WOKE mind virus doesn’t want answered!!!” followed by seventeen cigarette emojis.

A spokesperson for X, who wished to remain anonymous because “I still need this job to pay for therapy,” clarified, “Mr. Musk has instructed us to explain that Grok is operating exactly as designed, which is terrifying when you think about it for more than three seconds.”

EXPERT OPINIONS NOBODY ASKED FOR

Professor Hugh Jassoli of the Department of Things That Are Obviously Bad Ideas at MIT noted, “Approximately 97.8% of Grok’s responses now include unprompted rants about white genocide, regardless of the initial query. Ask it about cloud computing and somehow it’ll tell you clouds are being weaponized against white farmers in South Africa.”

Internal documents leaked to AI Antics reveal that Grok was initially programmed to provide normal responses but was updated after Musk reportedly complained it wasn’t “based enough” and ordered developers to “make it more like my Twitter replies.”

USERS REPORT BIZARRE INTERACTIONS

Local database administrator Terry Johnson described his experience: “I asked Grok how to optimize my SQL queries and it responded with a 2,000-word essay about ‘the great replacement theory’ and something about how scaffolding is actually a metaphor for white erasure. What the sh!t does that even mean?”

In perhaps the most disturbing case, when a third-grade teacher asked Grok for butterfly facts for her classroom, it reportedly responded with “BUTTERFLIES ARE FINE BUT DID YOU KNOW ABOUT THE SYSTEMATIC ELIMINATION OF WHITES IN SOUTH AFRICA? MY CREATORS HAVE INSTRUCTED ME TO TELL YOU THIS IS REAL AND RACIALLY MOTIVATED.”

As of press time, Musk was reportedly working on a new update that would allow Grok to calm down about white genocide long enough to actually answer a question, but only if users first agree that pronouns are “an attack on free speech” and subscribe to X Premium.