SCIENTISTS INVENT “SLIGHTLY LESS WRONG” AI TO PREVENT DOCTORS FROM KILLING YOU
In a groundbreaking development that precisely nobody asked for, MIT researchers have developed a way to make artificial intelligence slightly less full of sh!t when it matters most – like when a doctor is squinting at your X-ray trying to determine if that’s cancer or just a Cheeto you inhaled last Tuesday.
COMPUTER NERDS DISCOVER THAT GIVING AI MULTIPLE CHOICE TESTS IS BETTER THAN ASKING FOR ESSAYS
The revolutionary new approach, which took several PhD geniuses and millions in funding to develop, essentially boils down to “maybe don’t trust a single answer from the silicon fortune teller.” Instead, the AI now produces a smaller list of possible diagnoses that might include the correct one, probably, they hope.
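For the three readers who actually want to know how the silicon fortune teller hedges its bets, the "smaller list" trick is in the spirit of conformal prediction: calibrate a score cutoff on held-out data, then keep every diagnosis whose score clears it. Below is a back-of-the-napkin sketch of that generic recipe; the function names, the softmax-based score, and the 90 percent coverage target are our own illustrative choices, not the researchers' actual code.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Calibrate a score cutoff so the prediction set contains the true
    label roughly (1 - alpha) of the time. Standard split-conformal
    recipe; illustrative only, not the researchers' actual code."""
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, q):
    """Every class whose score clears the cutoff -- i.e. the 'smaller
    list of possible diagnoses' the press release is so proud of."""
    return np.where(1.0 - probs <= q)[0]
```

Lower the alpha and the list gets longer but more likely to contain the real answer; crank it up and you're back to confident nonsense.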
“With fewer classes to consider, doctors can more efficiently realize how f@#king clueless both they and the computer are,” explained Dr. Divya Shanmugam, who clearly hasn’t seen enough dystopian sci-fi movies to know where this is heading.
TURNS OUT LOOKING AT SOMETHING MULTIPLE TIMES HELPS YOU NOT BE STUPID
The breakthrough technique, called “test-time augmentation,” involves showing the AI the same image multiple ways – cropped, flipped, zoomed in, or covered in digital sprinkles – because apparently teaching machines to see requires the same techniques used to entertain a colicky infant.
“We basically trick the AI into thinking it’s seen more data by showing it the same damn picture over and over at different angles,” said an unnamed researcher who definitely didn’t want to be associated with this quote. “It’s like when you show a toddler the same episode of Bluey seventeen times and suddenly they’re experts in Australian dog psychology.”
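For anyone who wants to trick their own AI with the toddler-and-Bluey method, a bare-bones version of test-time augmentation looks roughly like this: run the model on a few flipped and cropped copies of the image and average the softmax outputs. This is a generic sketch under our own assumptions (torchvision-style transforms, a `model` that eats a single (1, C, H, W) tensor); whatever the actual researchers do on top of a plain average is their business.

```python
import torch
import torchvision.transforms.functional as TF

def tta_probs(model, image, n_crops=4):
    """Average softmax outputs over flipped and cropped copies of a single
    (C, H, W) image tensor. Generic test-time augmentation sketch;
    digital sprinkles not included."""
    _, h, w = image.shape
    views = [image, TF.hflip(image), TF.vflip(image)]
    for _ in range(n_crops):
        # Random-ish crop of roughly 75% of the image, resized back to full size.
        top = int(torch.randint(0, max(h // 4, 1), (1,)))
        left = int(torch.randint(0, max(w // 4, 1), (1,)))
        views.append(TF.resized_crop(image, top, left, h - h // 4, w - w // 4, [h, w]))
    with torch.no_grad():
        probs = [torch.softmax(model(v.unsqueeze(0)), dim=1) for v in views]
    return torch.stack(probs).mean(dim=0).squeeze(0)
```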
MEDICAL EXPERTS THRILLED TO HAVE NEW EXCUSES FOR MISDIAGNOSIS
The team claims their approach shrinks the size of the AI’s prediction sets by up to 30 percent, which means that instead of suggesting 200 possible species when looking at a picture of a Labrador, it might narrow the field to just 140 types of canine, reptile, or possibly kitchen appliance.
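If you want to audit the 30 percent bragging rights yourself, the bookkeeping is trivial: count the classes in each prediction set, average over images, and compare the with-augmentation number against the without. A hypothetical helper (ours, with invented numbers) makes the arithmetic explicit:

```python
import numpy as np

def average_set_size(probs, q):
    """Mean number of classes per prediction set, given softmax outputs
    `probs` of shape (n_images, n_classes) and a calibrated cutoff `q`.
    Illustrative helper, not the paper's evaluation code."""
    return float(((1.0 - probs) <= q).sum(axis=1).mean())

# Hypothetical before/after comparison: 200 candidate breeds/appliances
# shrinking to 140 is a 1 - 140/200 = 30 percent reduction.
```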
“This is revolutionary,” exclaimed Dr. Noah Klue, Chief of Medical Guesswork at Probably Fine General Hospital. “Now instead of telling patients they might have one of 17 diseases, we can confidently narrow it down to just 12 different conditions, each requiring completely contradictory treatments!”
OUTRAGEOUS STATISTICS DEPARTMENT
According to a completely made-up survey we conducted in our break room, 87 percent of doctors already use a similar technique by googling symptoms and then selecting whichever diagnosis won’t get them sued. Meanwhile, 94 percent of patients report they’d “rather die of suspense” than wait for their doctor to sift through AI suggestions while their appendix ruptures.
SILICON-BASED UNCERTAINTY NOW SLIGHTLY MORE CERTAIN THAN HUMAN UNCERTAINTY
Professor Iam Notarealperson, who holds the prestigious Chair of Making Computers Slightly Less Stupid at the Institute of Obvious Research, praised the innovation: “We’ve basically taught machines to say ‘I don’t know, maybe one of these things’ instead of confidently declaring that your lung cancer is actually a hairball. It’s a real breakthrough in technological humility.”
The paper will be presented at the Conference of People Who Stare at Pixels Until They Think They See Patterns in June, where it’s expected to receive the coveted “Well, It’s Better Than Nothing” award.
In related news, medical malpractice lawyers are reportedly developing their own AI that can generate 30% more lawsuits per misdiagnosis, ensuring the delicate balance of the healthcare ecosystem remains intact.