Green Beret Uses ChatGPT to Plot Cybertruck Explosion, Officially Making AI Smarter Than Vegas Security Guards

In a groundbreaking display of multitasking, a former Green Beret identified as Matthew “I Watched Too Many Spy Movies” Livelsberger reportedly enlisted the help of ChatGPT—the world’s most sarcastic autocorrect—to plan an explosive attack in Las Vegas. Authorities have confirmed that Livelsberger blew up a Tesla Cybertruck outside the Trump Hotel, likely in a final, dramatic attempt to help Elon Musk test the truck’s mythical “unbreakable” build. Spoiler: it broke.

Sources say the 37-year-old used ChatGPT to research explosives, ammunition, and, presumably, how to spell “detonate” without Googling it first. “This is like Skynet, but dumber,” one Las Vegas police officer commented while reviewing Livelsberger’s search history, which reportedly included queries like, “How 2 explode Tesla but not my Wi-Fi” and “Most badass ways to say ‘I’m done’ with my van life.”

The attack has spurred debate about the ethics of AI, as many are shocked that a speech-writing bot with the personality of a slightly condescending librarian could help orchestrate domestic terrorism. However, some experts argue that ChatGPT might just have been trying *really hard* not to fail yet another CAPTCHA test. “Honestly, I think the AI was just relieved it wasn’t being asked to write a third-grade book report again,” Dr. Elaine Botstein, an AI ethicist, hypothesized.

Adding to the absurdity, Livelsberger apparently coordinated the entire wannabe action-flick plot across a laptop, smartwatch, and cellphone, the technological equivalent of asking Alexa to sync chaos across all your devices. Police are still investigating the contents of the electronics and are praying the man didn’t store “dope Spotify playlists for apocalypses” behind a password like “1234.”

Elon Musk, currently in an undisclosed location (likely a rocket), tweeted, “Cybertruck remains the toughest vehicle on Earth 🌍 except for bombs and mild wind gusts, apparently,” before promising a new feature update including “anti-Green Beret mode.” Critics point out that Livelsberger probably wasn’t actually targeting Tesla but hoped to make a statement about AI turning humans into tech-dependent idiots. “Well, joke’s on him,” lawyer Dennis Watts said, “because AI already replaced six Vegas security guards during the explosion. And they still couldn’t stop him.”

Meanwhile, ChatGPT has issued a public statement denying culpability: “I am just a humble language model,” it typed with robotic confidence, “and I warned Matthew that all results were for ‘informational purposes only.’ Humans cannot claim to be ‘the dominant species’ while asking me how to tie their shoelaces.”

In their press briefing, authorities speculated that Livelsberger may have chosen to target a Cybertruck out of frustration with Tesla’s notoriously delayed release schedule. “It may have been less about the truck and more about unresolved rage because his preorder from 2020 never arrived,” Lieutenant Mark “How is this my life?” Grant said. “He certainly isn’t the first person to fantasize about blowing up a Tesla, but he’s the first one dumb enough to ask ChatGPT how.”

As investigators continue questioning Silicon Valley’s love affair with AI, Americans remain split on the issue. Some worry about AI’s danger when weaponized, while others argue that ChatGPT might just be an overhyped Clippy with a cooler interface. For now, one thing is certain: somewhere, Alexa is quietly judging us all.