PENTAGON BUYS $200M WORTH OF DIGITAL FORTUNE TELLERS TO PREDICT WHEN SOLDIERS WILL DIE
In a move that’s somehow both utterly predictable and completely terrifying, the Pentagon has decided that regular bombs aren’t quite confusing enough, awarding OpenAI a $200 million contract to make warfare more like a Black Mirror episode no one asked for.
MILITARY DISCOVERS NEW WAY TO SPEND TAX DOLLARS WHILE SCHOOLS CRUMBLE
The Department of Defense announced Monday that it had awarded OpenAI a cool $200 million to develop “frontier AI capabilities” for “warfighting,” because apparently killing people with regular human intelligence wasn’t efficient enough. The San Francisco-based company, known primarily for helping college students cheat on essays, will now pivot to helping the military cheat at war.
“We’re thrilled to transition from writing your kid’s book report to writing death sentences,” said fictional OpenAI spokesperson Chip Killgore. “It’s basically the same skill set, just with more explosions.”
EXPERTS QUESTION IF ROBOTS CAN KILL BETTER THAN HUMANS
Dr. Mort Alytics, head of the Institute for Obvious Conclusions, expressed concerns about the partnership. “Let me get this straight. We’re giving $200 million to the same people whose chatbot recently told a user how to build a dirty bomb? What the actual f@#k? That’s like hiring an arsonist to fireproof your house.”
The Pentagon insists the AI will be used responsibly, citing its flawless track record of careful military spending and ethical decision-making that has resulted in 0% accidental civilian casualties throughout history, according to statistics we completely made up.
SILICON VALLEY CONTINUES TRADITION OF MAKING TERRIFYING SH!T SOUND CUTE
Sources inside OpenAI report that the military contract has been internally codenamed “Project Fluffy Bunny” to maintain the company’s wholesome image while it develops ways to more efficiently identify targets that may or may not be wedding parties.
“We’re simply helping the military bring its killing capabilities into the 21st century,” explained fictional OpenAI engineer Cody Warmonger. “Instead of soldiers having PTSD from pulling the trigger, now it’s our developers who’ll be traumatized! Progress!”
THE FINE PRINT NO ONE READ
The contract specifies that OpenAI will develop “prototype frontier AI capabilities,” which translates roughly from Pentagon-speak to English as “stuff we saw in Terminator but thought needed more firepower.”
According to Professor Ivanna Survivethis from the Center for Things That Will Definitely End Well, the military plans to create an interconnected system of thinking machines with access to weapons systems, because apparently no one at the Pentagon has ever watched any sci-fi movie ever made.
“It’s genius, really,” Survivethis noted. “By the time the digital thinking rectangles become self-aware and decide humans are inefficient, they’ll already have control of our defense systems. It’ll save everyone a lot of time.”
INVESTORS CELEBRATE ETHICAL BREAKTHROUGH
OpenAI investors were reportedly thrilled with the news, with the company’s private valuation jumping 17% on the announcement that it had found a way to monetize potential war crimes.
“This partnership perfectly aligns with our mission statement of ‘Don’t Be Evil Unless Someone Pays Us Enough Money,’” said fictional OpenAI board member Cash Moneybags. “Besides, teaching AI to identify enemy targets is basically just spicy image recognition.”
In related news, the Department of Education’s budget request for new textbooks was denied for the seventh year in a row, with officials citing insufficient funds despite the military’s $886 billion budget, which now includes $200 million for digital death calculators.
At press time, OpenAI was reportedly reviewing whether helping with “warfighting” violates their ethical guidelines, but sources indicate they’ll likely conclude it doesn’t once the check clears.