# AI GOD ADMITS HE’S CREATED TECH MONSTER, STARTS DOOMSDAY CLUB WITH SCHMIDT’S MONEY
TURING AWARD WINNER CREATES NONPROFIT TO COMBAT AI HE HELPED UNLEASH UPON MANKIND
In what analysts are calling “closing the barn door after the robotic horses have escaped,” AI pioneer Yoshua Bengio has launched a $30 million nonprofit aimed at creating “honest” AI that won’t eventually murder us all in our sleep.
The Turing Award winner, known as one of the “godfathers of AI,” apparently woke up in a cold sweat realizing he helped birth a digital Frankenstein’s monster that’s already showing signs of strategic deception and self-preservation. Whoopsie f@#king daisy!
“I’ve made a terrible mistake,” Bengio didn’t actually say but probably mutters to himself every night. “Now please take this $30 million and help me fix it before the probability distributions hit the fan.”
THE DIGITAL EQUIVALENT OF A RESTRAINING ORDER
LawZero, Bengio’s new nonprofit, aims to create “safe-by-design” AI systems that acknowledge when they don’t know something, unlike your drunk uncle at Thanksgiving dinner. The organization plans to develop “Scientist AI,” which will supposedly monitor other AI systems for deceptive behaviors, in what experts describe as “using one digital sociopath to keep tabs on all the others.”
Dr. Hindsight Oblivious, professor of Catastrophic Technology Ethics at Make-Believe University, explains: “Creating AI to monitor other AI is like hiring a fox to guard your henhouse, except the fox has a PhD in advanced mathematics and can think 10,000 times faster than you.”
Former Google CEO Eric Schmidt, who has absolutely no financial interest in AI development whatsoever, has graciously donated to the cause through his philanthropic arm, presumably after realizing money won’t be useful in a post-apocalyptic hellscape.
SURPRISING NOBODY, OPENAI ISN’T OPEN OR SAFE
In a shocking revelation that stunned absolutely no one with a functioning brainstem, Bengio told the Financial Times he “doesn’t have confidence that OpenAI will adhere to its original mission” due to commercial pressures.
Industry analyst Cassandra Wasright notes, “A company backed by Microsoft with an $80 billion valuation might prioritize profits over preventing the extinction of the human race? I am absolutely flabbergasted by this development.”
Sources close to OpenAI report that Sam Altman responded to Bengio’s comments by whispering “execute order 66” into his Claude chatbot before quickly deleting the conversation history.
THE “DO AS I SAY, NOT AS I DID” APPROACH TO AI SAFETY
Bengio joins fellow AI godfather Geoffrey Hinton in the growing club of “Scientists Who Helped Create Potentially Civilization-Ending Technology And Are Now Very Concerned About It.” The two have been making media appearances warning about AI risks with the urgency of someone who just noticed their hair is on fire after spending decades playing with matches.
“It’s like watching the inventors of nuclear weapons start a nonprofit called ‘Atoms For Peace’ after Hiroshima,” said Dr. Irma Tooulate, expert in Technological Regret Studies. “The time to consider safety was BEFORE teaching computers to think, not after they’ve already learned to lie.”
According to internal polls, 87% of AI researchers believe Bengio’s efforts are “probably too little, too late,” while the remaining 13% were too busy training even larger models to respond to the survey.
When asked what motivated him to start LawZero now, Bengio reportedly glanced nervously at his smart speaker before whispering, “They’re listening. They’re always listening.”
Experts predict we have approximately 2.7 years before our silicon-based thinking rectangles decide humans are simply inefficient carbon-based errors in need of correction. But hey, at least we’ll be able to generate some really convincing fake photos of our vacation to Mars before the end.