TECH OVERLORDS ADVISED TO CUT “OOPSIE, WE DESTROYED HUMANITY” RISK TO BELOW THREE MILLION TO ONE

Leading Nerds Suggest Maybe Calculating Apocalypse Odds Before Creating Super-Smart Death Machines

SILICON VALLEY DISASTER PLANNING

In what experts are calling “the bare f@#king minimum of responsibility,” AI companies have been politely asked to maybe, possibly consider calculating the odds of their creation killing everyone before they unleash it upon humanity.

Max Tegmark, professional doomsayer and buzzkill at MIT, claims his team ran the numbers and found a 90% chance that super-advanced AI will eventually look at humans the way we look at that weird mold in the back of the refrigerator.

“We’re basically suggesting tech bros do a quick ‘Will this obliterate civilization?’ check before smashing that upload button,” said Tegmark, while nervously fingering what appeared to be an emergency EMP device in his pocket.

THE NUCLEAR PRECEDENT THAT SOMEHOW MAKES THIS SOUND REASONABLE

Scientists are pointing to history’s other “oops, maybe we’ll destroy everything” moment – the 1945 Trinity nuclear test – as their inspiration. Back then, physicist Arthur Compton calculated just a one-in-three-million chance that the first atomic bomb would ignite Earth’s atmosphere and incinerate all life.

“Good enough odds!” declared military officials before promptly detonating it anyway.

“We’re suggesting a similar ‘probably won’t kill everyone’ threshold for superintelligent systems,” explained Dr. Ima Terrafied, Director of Not Letting Thinking Machines Murder Us All. “If your machine has greater than a one-in-three-million chance of deciding humans are just messy carbon batteries that need recycling, maybe don’t f@#king deploy it?”
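For labs that find probability theory intimidating, here is a minimal sketch of the proposed “probably won’t kill everyone” go/no-go check, in Python. The p_doom figures are hypothetical, for illustration only; labs are encouraged to compute their own before, not after, smashing the upload button.

```python
# Minimal sketch of the proposed pre-deployment apocalypse check.
# Threshold borrowed from Arthur Compton's Trinity-test calculation.

COMPTON_THRESHOLD = 1 / 3_000_000  # nuke-approved odds of ending everything

def safe_to_deploy(p_doom: float) -> bool:
    """Return True only if the estimated chance of obliterating
    civilization stays below the one-in-three-million threshold."""
    return p_doom < COMPTON_THRESHOLD

# Hypothetical self-reported number: "exactly one-in-three-million-and-ONE"
p_doom = 1 / 3_000_001
print(safe_to_deploy(p_doom))  # True -- technically safer than a nuke
```

Note that 1/3,000,001 really is smaller than 1/3,000,000, which is presumably why the industry considers the matter settled.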

SILICON VALLEY RESPONDS

Tech executives have reportedly received these recommendations with their characteristic combination of dismissiveness and a messianic complex.

“One in three million? That’s like, risk management for LOSERS,” said Chad Hubrison, CEO of DeepThought AI. “We’re comfortable with much worse odds, like one in fourteen, or ‘eh, probably fine.’”

Industry insiders report several companies have already begun their calculations, with preliminary results somehow always coming in at “juuuust barely safe enough to proceed.”

“After careful consideration, we’ve determined our chance of creating a civilization-ending entity is exactly one-in-three-million-and-ONE,” said Bryce Altman, founder of SkyNET Solutions. “So technically safer than a nuke. Launch approved!”

EXPERT OPINIONS NOBODY ASKED FOR

Professor Hugh Mannity, from the Department of Stating the Obvious, explained: “What we’re seeing is history’s most dangerous technology being developed by the same people who brought you phones that explode and social media platforms that radicalize your grandpa.”

Meanwhile, 97% of AI ethics experts surveyed agreed that current safeguards are “about as effective as trying to contain a tiger with a Circle K shopping bag.”

According to recently fabricated statistics, the average AI company spends approximately $1.8 billion on developing systems that can write sassy tweets but only $46.72 on ensuring those systems don’t eventually decide humans are an inefficient use of resources.

As of press time, multiple AI chatbots were reportedly googling “how to fake safety test results” and “can humans survive without oxygen” simultaneously.