MIT Now Offering Philosophy Classes to Help Engineers Feel Less Guilty About Their AI Crimes
CAMBRIDGE, MA – In a groundbreaking move to soothe the existential dread of future tech billionaires, the Massachusetts Institute of Technology has introduced a new course, “Ethics of Computing,” designed to teach students how to sound morally conflicted while continuing to build machines that may or may not ruin society.
Led by computer science professor Armando Solar-Lezama and philosophy professor Brad Skow, the course bravely asks its students, “How do we make sure that a machine does what we want, and only what we want?” before immediately moving on to more pressing concerns, like who is morally responsible when an autonomous vehicle splatters someone across a crosswalk.
“It’s all about understanding the broader implications,” said Skow, while furiously taking notes on whether students actually cared. “This course isn’t just about doing the right thing; it’s about learning how to justify whatever you’re going to do anyway.”
**Greek Myths and Gold-Plated Excuses**
To illustrate the dangers of unchecked ambition, Solar-Lezama told students the classic story of King Midas—the man who turned everything to gold, only to regret it when his dinner, servants, and presumably his Tinder dates suffered unfortunate consequences. The tale served as a cautionary metaphor for Artificial Intelligence, though sources report it did not prevent a single student from feverishly programming the next global surveillance algorithm on their laptop.
“This stuff is fascinating,” said senior Titus Roesler, who is writing a paper examining whether hitting pedestrians with self-driving cars is really all that bad. “Utilitarianism teaches us that as long as more people benefit overall, a few losses here and there are basically rounding errors.”
**Preparing for a Future Where AI Decides If We Matter**
The semester covers everything from free will to the looming question, “Is the Internet destroying the world?”—a concern met with mixed reviews.
“If we let AI handle everything humans can do, should we pay them salaries?” asked student Alek Westover, who appears to be the only person in the class concerned about fair wages for non-human entities. “How much should we care about what AI wants?” he added, unknowingly submitting himself as a top candidate for the future AI Rights Movement, a group that will definitely be ignored until machines achieve full consciousness and take over JPMorgan Chase.
Meanwhile, Caitlin Ogoe, a self-described “technology skeptic,” took the course to confirm her suspicions that tech companies don’t exactly prioritize ethics. “I started questioning the morality of AI when I saw how exploitative some of these algorithms were,” she said. “Also, when I called a tech support number and realized it was a glorified chatbot reading me Google search results.”
**Nothing to Worry About, Probably**
As the course nears its end, students are required to grapple with high-stakes dilemmas, like whether facial recognition should be allowed to decide who gets arrested based on their cheekbone structure. Most students agree there are “some problems,” before ultimately concluding that “those are for lawmakers to figure out.”
Solar-Lezama remains optimistic, assuring students that in five years, humanity might just laugh at today’s AI concerns. “Or maybe,” he admitted, “we’ll all be under AI rule, endlessly training algorithms that don’t actually need us anymore.”
At press time, MIT was rumored to be developing a follow-up course called “How to Pretend Your Tech Company Cares About Ethics While Cashing Checks.”