SCIENTIST WITH CHEMICAL INDUSTRY FUNDING DEVELOPS ‘B.S. DETECTOR’ AI THAT SOMEHOW ALWAYS AGREES WITH HIM

DENVER—In a groundbreaking application of artificial intelligence that absolutely no one asked for, risk analyst Tony Cox has created what industry insiders are calling “the world’s first self-confirming bias machine” designed to scan scientific literature and mysteriously conclude that pollutants aren’t that bad for you after all.

TOTALLY COINCIDENTAL FUNDING SOURCES

Cox, whose work has been generously supported by the chemical industry because they’re just really passionate about unbiased science, has programmed his new digital assistant to tirelessly search academic journals for what he calls “correlation versus causation errors” and what everyone else calls “inconvenient findings that might hurt quarterly profits.”

“People keep saying ridiculous things like ‘breathing toxic chemicals causes health problems’ when there’s clearly no way to prove that,” explained Cox, who previously served as a Trump administration advisor where he reportedly argued that clean air has never been definitively proven to save lives. “My algorithm will finally put an end to this madness by flagging any study suggesting that dumping poison into the environment might somehow be poison.”

REVOLUTIONARY TECHNOLOGY THAT ONLY FINDS WHAT YOU WANT TO FIND

The AI system, unofficially nicknamed “GasLight-GPT,” works by scanning thousands of peer-reviewed studies and identifying those pesky instances where scientists suggest that chemicals known to cause cancer might cause cancer.

“It’s absolutely revolutionary,” gushed Dr. Profit Marginson, CEO of DefinitelyNotToxic Chemical Corporation. “This AI can read 10,000 studies showing a link between our products and horrific health outcomes and instantly find the one methodological quibble that lets us keep selling them.”

EXPERTS WEIGH IN

“What we’re witnessing is f@#king incredible,” said Professor Isee Rightthrough, an ethics specialist at Actually Legitimate University. “He’s essentially created an automated doubt factory. It’s like if tobacco companies in the 1960s had access to supercomputers.”

Meanwhile, Cox defends his work as purely scientific. “My AI is completely neutral,” he insisted while standing in front of a wall covered in checks from oil, gas, and chemical companies. “It just happens to reach the same conclusions as my funders approximately 100% of the time, which is a statistical coincidence my algorithm assures me is totally possible.”

THE REVOLUTIONARY SCIENCE OF “MAYBE NOT”

Cox’s previous work includes groundbreaking papers like “Maybe Asbestos Is Just Misunderstood” and “Have We Considered That Maybe Cancer Causes Pollution?” His methodology typically involves finding any degree of uncertainty in scientific research and expanding it into a grand canyon of doubt.

According to a completely made-up survey that sounds just plausible enough to be included here, 87% of Cox’s published papers can be summarized as “we just can’t be sure,” while the remaining 13% conclude “more research is needed, preferably funded by the industries being researched.”

FUTURE APPLICATIONS

Sources close to the project reveal that Cox’s next AI application will determine whether looking directly at the sun is actually harmful or if that’s just another example of correlation being confused with causation.

“Just because every person who stared at the sun subsequently went blind doesn’t PROVE the sun caused it,” Cox reportedly explained to colleagues. “My algorithm will analyze alternative explanations, like whether these people were predisposed to spontaneous blindness exactly 8 seconds after sun-gazing.”

As this silicon-based bullsh!t generator continues development, chemical company executives can finally sleep soundly knowing that no matter how damning the evidence against their products becomes, there will always be an algorithm willing to say, “Well, actually…”