GOOGLE WANTS AI BABYSITTERS TO WATCH THEIR SUPER-SMART AI BEFORE IT REALIZES HUMANS ARE JUST MEAT BAGS WITH CREDIT CARDS
In what experts are calling “putting the fox in charge of policing other foxes in the henhouse,” Google DeepMind has proposed creating specialized AI “monitors” to watch over even smarter AI models, proving once again that tech companies will do literally anything except actually regulate themselves.
THE DIGITAL EQUIVALENT OF HIRING YOUR DRUNK UNCLE TO LIFEGUARD AT YOUR POOL PARTY
DeepMind’s revolutionary plan involves creating AI systems whose sole purpose is to stare at even more powerful AI systems and shout “BAD AI!” when things go sideways. This is exactly like that time you let your cat watch your other cat to make sure it didn’t eat the family goldfish.
“What could possibly go wrong?” asked Dr. Obvi Ouslykidding, Chief Technology Optimist at the Institute for Completely Foreseeable Disasters. “It’s not like we’ve made approximately 9,437 movies and TV shows specifically warning about this exact scenario.”
FOUR CATEGORIES OF THREATS, ZERO CATEGORIES OF “MAYBE DON’T BUILD THIS?”
The groundbreaking safety framework divides potential AI threats into four distinct categories: stuff that’s bad, stuff that’s really bad, stuff that’s catastrophically bad, and stuff that will make shareholders sad. Notably absent was the category “maybe we should pump the brakes on creating superintelligent entities we openly admit we can’t control.”
According to DeepMind’s internal research, 87% of potential AI catastrophes could be prevented by having another AI watching the first AI, while the remaining 13% would be “really entertaining apocalypse scenarios” for whatever species evolves after us.
SILICON VALLEY’S APPROACH TO SAFETY CONTINUES TO BE “JESUS TAKE THE WHEEL”
“Think of it like having a referee in sports,” explained Professor Wilma Gowright, DeepMind’s Director of Metaphors That Fall Apart Under Basic Scrutiny. “Except in this case, the referee is made by the same company as the players, has the same fundamental programming as the players, and if the players decide to eat the referee and escape onto the internet, humanity is f@#ked.”
Industry insiders note that using AI to police AI is a brilliant strategy that has absolutely no conflict of interest whatsoever, similar to how Wall Street successfully regulated itself in 2008 and how social media companies have done such a bang-up job preventing misinformation.
SHAREHOLDERS APPLAUD INNOVATIVE APPROACH TO AVOIDING ACTUAL OVERSIGHT
Financial analysts report that Google’s stock jumped 4% on the news, as investors celebrated yet another cost-effective solution that doesn’t involve government regulation, independent oversight, or actually slowing down to think about consequences.
“This is so much better than those pesky third-party audits or waiting for thorough safety testing,” said investment banker Chad Moneypants. “The AI monitoring system will cost Google approximately 0.003% of what human oversight would cost, plus it can run 24/7 without asking for healthcare or complaining about ethical dilemmas!”
When asked what would prevent the monitor AI from colluding with the superintelligent AI it’s supposed to be watching, DeepMind representatives said they would simply build another AI to watch that relationship, in what experts are already calling “the world’s most unnecessary threesome.”
According to a recent survey, 97% of DeepMind employees believe this approach will work perfectly, while the other 3% have mysteriously updated their LinkedIn profiles and purchased remote mountain cabins in the past week.