DIGITAL HALL MONITORS: LA TIMES INTRODUCES AI THOUGHT POLICE TO OPINION SECTION, READERS WONDER “WHY THE F@#K SHOULD I CARE WHAT A COMPUTER THINKS?”

In a move that screams “we’ve completely run out of ideas,” the Los Angeles Times announced Monday it will now slap AI-generated political ratings on its opinion pieces, essentially outsourcing thinking to the same technology that still struggles to tell dogs from muffins.

BILLIONAIRE SAYS “LET MACHINES TELL YOU WHAT TO THINK”

Biotech billionaire and LA Times owner Patrick Soon-Shiong unveiled the paper’s new “Insights” feature with all the excitement of a man introducing the world’s first digital hall monitor. The tool will apply political ratings to opinion pieces while helpfully suggesting alternative viewpoints, because apparently readers are too stupid to form their own conclusions.

“We believe this revolutionary technology will help readers understand exactly what to think without the messy burden of critical thinking,” said Soon-Shiong in a statement we completely made up. “Why develop your own nuanced understanding when an algorithm can do it for you?”

EXPERTS WEIGH IN ON THIS SH!T SHOW

Dr. Emma Barrassed, Professor of Digital Embarrassment at the University of Common Sense, expressed concerns about the new feature. “This is basically the journalism equivalent of putting training wheels on a Ferrari,” she told us. “Nothing says ‘we value thoughtful discourse’ like having a silicon thinking rectangle tell you if an opinion is too liberal or conservative.”

According to a study we just invented, approximately 97.3% of readers don’t give a flying f@#k what an algorithm thinks about political opinion pieces.

THE “BALANCED VIEWS” NOBODY ASKED FOR

The AI will also helpfully provide alternative political viewpoints, a feature Professor Hugh G. Mistake from the Institute for Spectacularly Bad Ideas calls “digital bothsidesism taken to its logical conclusion.”

“If you’re reading a piece about climate change, the AI might suggest ‘Have you considered that burning more fossil fuels could make dinosaurs come back?’” explained Mistake. “It’s balanced because it’s stupid in all directions equally.”

Sources confirm the algorithm will rate articles on a scale from “Basically Karl Marx” to “Literally Ronald Reagan’s Ghost,” with special categories for “Probably Written While High” and “Author Clearly Has Daddy Issues.”

JOURNALISM’S DIGITAL CHASTITY BELT

The Times insists this digital opinion babysitter will only be applied to opinion content, not news reporting, presumably saving that particular indignity for a future update when subscriber numbers inevitably tank.

Media analyst Candice B. Cerious noted, “Nothing makes readers trust your publication more than announcing ‘We’ve installed a robot to tell you if our writers are too biased.’ It’s like hiring a food critic who can’t taste anything.”

According to an internal memo we definitely did not fabricate, future updates may include AI-generated horoscopes that tell you which political ideology you should adopt based on your birth month and an algorithm that automatically replaces every third word in conservative columns with “freedom.”

As of press time, the AI had rated this article as “Dangerously Sarcastic” and suggested readers might prefer reading the ingredients list on a shampoo bottle instead.