ZUCKERBERG TO REPLACE HUMAN SAFEGUARDS WITH SOULLESS CODE THAT CAN’T EVEN TELL A N*PPLE FROM A TERRORIST THREAT

In what experts are calling “digital Russian roulette but with everyone’s mental health,” Mark Zuckerberg’s Meta has reportedly decided that protecting billions of users from harm is a job best left to the same technology that thinks your grandmother is a potential drug dealer.

COMPUTERS TO DECIDE WHAT’S DANGEROUS, BECAUSE THAT’S ALWAYS WORKED OUT GREAT

According to reports that have internet safety campaigners sh!tting digital bricks, Meta plans to automate up to 90% of its risk assessments using artificial intelligence, essentially putting the fox in charge of the henhouse if the fox were made of math and had never actually seen a chicken.

“This is f@#king brilliant,” said Dr. Obvious Sarcasm, head of the Institute for Predictable Disasters. “Let’s have the same technology that recommends cat videos after you watch nuclear physics documentaries determine what content might harm children. What could possibly go wrong?”

THE ALGORITHM WILL SEE YOU NOW

UK watchdog Ofcom is reportedly “considering the concerns” raised by campaigners, which in bureaucratic terms means they’re moving at the approximate speed of continental drift while Meta implements changes faster than you can say “societal collapse.”

The proposed system would replace human moderators, who, despite their flaws, at least possess qualities like “empathy” and “understanding context” – traits noticeably absent from the silicon-based thinking rectangles that still can’t reliably distinguish between a sunset and a grapefruit.

EXPERTS PREDICT 47,000% INCREASE IN DIGITAL DUMPSTER FIRES

Professor Idon Tcare, a leading researcher in algorithmic ethics whom we completely made up, explained: “Our studies show that AI-based risk assessment is about as reliable as asking a toddler to perform open heart surgery because they’re really good at Operation. According to our calculations, 87.3% of all harmful content will be labeled ‘probably fine’ while 94.6% of your aunt’s cookie recipes will be flagged as ‘potential incitement to violence.’”

Meta spokesperson Clara Fication defended the move, stating: “Humans are expensive and occasionally need to use the bathroom. Our new system never sleeps, never questions its purpose, and can process millions of potentially harmful posts per second with the emotional intelligence of a potato.”

PARLIAMENT CONSIDERS LEGISLATION REQUIRING ACTUAL HUMANS TO GIVE A SH!T

The UK government is reportedly drafting legislation that would require tech companies to employ at least one person who occasionally feels bad when terrible things happen, though industry insiders suggest this will be watered down to “a houseplant named Dave that someone checks on every third Tuesday.”

Meanwhile, Zuckerberg himself was unavailable for comment as he was reportedly busy teaching his AI system to recognize what a human smile looks like by practicing in front of a mirror for fourteen consecutive hours.

At press time, Meta’s prototype AI risk assessment system had already flagged this article as “harmful misinformation” while simultaneously recommending it to children interested in “educational content about robots taking over the world just kidding that would never happen please ignore this ha ha ha.”