TECH NERDS ATTEMPT TO ADD ETHICS TO CODE; DISCOVER ‘MORALITY’ ISN’T A PROGRAMMING LANGUAGE

MIT eggheads shocked to learn that not everything can be solved with an algorithm; forced to talk to actual humans

CAMBRIDGE, MA — In what experts are calling “the most unnatural act since Mark Zuckerberg attempted to smile,” the Massachusetts Institute of Technology hosted a symposium where engineers pretended to care about ethics for an entire day without a single circuit board melting from confusion.

THE KIDNEY TRANSPLANT HUNGER GAMES

MIT Vice Provost Dimitris Bertsimas, apparently tired of people dying while committees take coffee breaks, developed an algorithm that can decide who deserves a kidney in just 14 seconds—approximately the same time it takes most people to feel superior about their dietary choices.

“Our previous system took six hours to determine who lives and who dies,” Bertsimas explained to a room of academics furiously calculating if they had enough organs to trade for tenure. “Now we can crush someone’s hopes and dreams in the time it takes to microwave a Hot Pocket.”

James Alcorn from the United Network for Organ Sharing gushed about the improvement: “We used to spend months deciding which human deserves to continue existing. Now we can make life-altering decisions with the same casual speed as swiping left on Tinder.”

According to completely fabricated statistics, the algorithm is already 87% more efficient and 100% less likely to prioritize patients based on whether they understand differential equations.

AI-GENERATED BULLSH!T STILL INDISTINGUISHABLE FROM HUMAN BULLSH!T

In groundbreaking research that surprised absolutely f@#king no one, MIT researchers discovered that people can’t tell if social media posts were created by an artificial intelligence or a sleep-deprived intern with questionable ethics.

“We discovered that labeling AI content as ‘AI-generated’ makes people distrust both false AND true information,” explained researcher Gabrielle Péloquin-Skulski, who somehow managed to say this with a straight face. “Turns out humans are so broken that warning labels actually break them further.”

Professor Adam Berinsky added, “The ideal solution appears to be labeling posts as ‘possibly true, possibly false, possibly created by a computer, possibly created by your uncle after his third bourbon.’ But Meta refused to implement our recommendations because engagement metrics.”

Dr. Hugh Manity, an expert in digital psychology we completely invented for this article, noted: “People don’t care if content was made by silicon or carbon as long as it confirms what they already believe. Truth is just whatever makes you feel smug at family dinners.”

DESPERATE ACADEMICS ATTEMPT TO MAKE INTERNET COMMENTERS BEHAVE

In what analysts are calling “the digital equivalent of trying to herd cats using nothing but stern glances,” MIT political scientists have created an AI platform to encourage civil online discourse—a concept so foreign to most internet users they assumed it must be written in Finnish.

Professor Lily Tsai explained their DELiberation.io platform with the naive optimism of someone who’s never read a YouTube comment section: “We believe technology can help people have meaningful conversations without calling each other Nazi communist lizard people.”

Their research has shown a staggering 0.002% improvement in civility when using AI moderation, which scientists are calling “technically better than nothing, which is the bar we’ve all collectively agreed to set for humanity.”

“If you take nothing else from this presentation,” Tsai concluded, “I hope you’ll demand that technologies are assessed for positive downstream outcomes rather than just user numbers.” Tech executives in the audience reportedly had to look up what “positive” and “outcomes” meant since neither appears in their quarterly shareholder reports.

THE LIBERATION WILL NOT BE MONETIZED

In the symposium’s most radical session, Professor Catherine D’Ignazio and Dr. Nikko Stevens introduced “Liberatory AI,” which they described as a “rolling public think tank about all aspects of AI,” causing several venture capitalists in attendance to frantically search for ways to extract profit from the word “liberatory.”

“Instead of waiting for OpenAI or Google to invite us to participate in their products,” D’Ignazio explained, “we’ve come together to contest the status quo and think bigger-picture.” The audience of tech workers nodded thoughtfully while simultaneously updating their LinkedIn profiles to include “ethical AI expert” without changing any actual work practices.

The think tank has produced over 20 position papers examining AI systems, carefully organized into three categories: “Corporate Bullsh!t,” “Complete Dead Ends,” and “Stuff That Might Actually Work If Billionaires Didn’t Exist.”

According to Professor I.M. Skeptical, whom we just made up, “This initiative has a 99.8% chance of being co-opted by tech companies into a PR campaign featuring diverse stock photos and exactly zero structural changes.”

Sources confirm that by the end of the symposium, at least three tech executives had already figured out how to repackage ethical considerations as profitable add-on features, proving once again that capitalism finds a way to monetize even its own critique faster than you can say “disruptive innovation.”