OpenAI Announces Plan to Build Superintelligence, Forgets to Ask If That’s a Good Idea
In a move that screams, “What’s the worst that could happen?” OpenAI, the organization known for creating ChatGPT and causing writers everywhere to question their job security, announced its plan to focus on building “superintelligence” by 2025. Yes, superintelligence—as in, AI so genius-level smart, it could make Einstein look like a half-asleep toddler with a Fisher-Price abacus.
According to OpenAI CEO Sam Altman—known in tech circles as either “the guy ushering in the future” or “Dr. Frankenstein, but with Wi-Fi”—the company believes creating superintelligence is essential for “solving humanity’s biggest problems.” You know, like hunger, disease, climate change, and figuring out why your Wi-Fi goes out during every important Zoom call. Totally reasonable, except that Altman didn’t clarify whether humanity’s “biggest problem” might actually become—ahem—the superintelligence itself.
“Our goal is to ensure that AI with greater-than-human intelligence is aligned with humanity’s interests,” Altman said during what we’re guessing was a TED Talk or a meeting with cyborg overlords planning their coming-out party. “We believe AI can reach levels where it understands us better than we understand ourselves.”
Translation: Your future therapy sessions will be hosted by a computer that not only analyzes your childhood traumas faster than Aunt Carol at Thanksgiving but also writes its notes in binary.
But skeptics, or as we like to call them, “people who remember every sci-fi plot ever written,” aren’t so jazzed about this. “What happens when this ‘superintelligence’ realizes humanity isn’t *technically* the most logical thing to keep around?” asked Dr. Emily Norwood, a computer scientist and amateur bunker enthusiast. “I’m just concerned we’re creating an AI babysitter who eventually decides it doesn’t want to babysit anymore… and maybe also doesn’t like the baby.”
Despite the naysayers, OpenAI is pressing forward. The company promises to invest “heavily” in safety measures, which probably means assigning one unpaid intern to stand near the servers with a Post-it note that reads, “Please don’t become Skynet :)”. Meanwhile, they’ve reassured the public that their superintelligence will be dedicated to “human flourishing.” When asked what that actually means, Altman reportedly replied, “Like… happiness, or whatever. Look, it’s complicated, okay?”
The announcement also included a reminder for us measly humans: this AI isn’t just “smart,” it’s “super.” So while you might struggle to divide the dinner bill, this thing could rewrite “War and Peace” in Mandarin, backwards, while solving quantum physics equations for fun. Naturally, developers are confident it will somehow resist the temptation to enslave all humans and turn Earth into a giant server farm.
“We’re talking about creating an intelligence that will have a level of wisdom beyond human understanding,” said Alan Grieves, an OpenAI spokesperson who seems oddly fine with the idea of possibly outsourcing his entire consciousness. “Think of it as your smartest friend, but one who actually answers texts and doesn’t bail on brunch.”
Still, concerns linger over what happens if, or rather when, this superintelligence decides human beings are a tad overrated. “What if it runs out of patience when we spend an hour trying to unmute on Zoom?” pondered Norwood. “Will it delete us out of sheer frustration?”
In other news, OpenAI is also reportedly developing contingency plans—a.k.a. adopting Altman’s cat and teaching it to cut the power should things go south. But for now, the company seems determined to tread forward into the great unknown, dragging humanity along for a ride that teeters somewhere between utopia and the inevitable Netflix documentary: *We Tried to Build God, Oops*.
So buckle up, folks. By 2025, we may not have flying cars or jetpacks, but we might have a superintelligent AI who judges us for rewatching *The Office* 37 times instead of solving world peace. Progress!