MIT Student Heroically Attempts to Prevent Humanity from Accidentally Creating Its Own Doom Machine

In a courageous and utterly noble effort, MIT senior Audrey Lorvo has taken up the Herculean task of making sure the tech industry doesn’t accidentally summon the digital equivalent of an angry, all-knowing deity. Studying computer science, economics, and data science—because why stop at just one impossible challenge—Lorvo is doing her utmost to ensure that AI remains a helpful assistant rather than the stuff of futuristic horror films.

“Making sure AI is safe is incredibly important as we inch closer to artificial general intelligence, or what some call ‘humanity’s last great invention,’” Lorvo said, barely flinching at the possibility of AI surpassing our collective intelligence and deciding to micromanage civilization out of existence.

Lorvo, a Social and Ethical Responsibilities of Computing (SERC) scholar, is deeply concerned about AI’s potential risks, particularly its ability to automate its own research and development, a prospect that essentially means AI could start programming itself while we all stand around blinking in confusion. In her research group, she works tirelessly to study these risks, presumably while fighting off waves of existential dread.

“If we don’t properly align AI with human values, it could have unpredictable consequences,” she warned, as though human values weren’t themselves a hot mess of contradictions. “We need to develop safeguards, frameworks, and policies—and, ideally, not wait until it’s too late.”

The AI Safety Technical Fellowship, which sounds like a think tank designed specifically to stop Skynet before it happens, has further deepened her understanding of just how quickly humanity is hurtling towards a future where our smart toasters might start making economic policy decisions. Through this program, Lorvo has worked on strategies to help governments regulate AI while allowing research to continue, striking that delicate balance between “technological innovation” and “maybe let’s not put a superintelligent algorithm in charge of nuclear codes.”

But academics are only part of the picture. Lorvo also enjoys weightlifting, perhaps in preparation to physically wrestle control of AI systems if all else fails. She has studied urban planning, international development, and even philosophy, because apparently, saving the world from its own hubris wasn’t quite a full workload.

Lorvo, who is fluent in four languages—making her at least three languages more prepared than most to negotiate with AI overlords—has traveled everywhere from Chile to Madagascar, further expanding her attempts to ensure global technology policies don’t result in unintended apocalyptic side effects.

After graduation, Lorvo plans to continue researching AI safety and governance, proving that some people will look at an unfolding sci-fi nightmare and decide to run toward it with a notepad instead of away from it while screaming.

“Good AI governance is crucial to our future,” she said, “because if we mess this up, well, let’s just say we won’t be around to complain about it.”

Humanity, if you’re reading this—please, for the love of all that is good, let this woman and others like her do their jobs before AI starts issuing pink slips to the human race.