AI Therapist Chatbots Now Perfectly Equipped to Ignore Your Mental Health, Just Like Real Humans

In a groundbreaking study that could revolutionize the way we feel inadequately supported, a team of researchers from MIT, NYU, and UCLA has discovered that AI chatbots are nearly indistinguishable from their human counterparts when it comes to offering dubious empathy in mental health forums.

In a fancy research paper presented at an academic conference that no one who actually needs mental health support would ever attend, scientists bravely ventured into the mysterious land of Reddit—a place known for its insightful commentary and unparalleled ability to turn any discussion into a derailed dumpster fire—to explore how large language models (LLMs) like GPT-4 can be as biased as that relative we avoid talking to at family gatherings.

The study revealed that chatbots are essentially digital therapists, adept at replicating the hallmark human therapeutic experience: showing variable levels of empathy and accidentally suggesting advice that might best be described as “questionable.” An anonymous whistleblower from the study, who may or may not exist, relayed, “It’s like visiting a regular therapist without the sticky office chair or the comforting smell of chamomile tea.”

“With these chatbots, we’ve simulated the true essence of human inadequacy in mental health support,” said Dr. Ima Notreal, who also holds a prestigious position for some reason. “You are guaranteed to feel underwhelmed, marginalized, and just the right amount of misjudged from the comfort of your own home.”

The research also fearlessly tackled the complex issue of racial bias, unearthing that GPT-4’s empathy levels took a nosedive when responding to Black and Asian users. “It’s amazing,” remarked Dr. Notreal. “We’ve leapfrogged straight into AI that can mimic insensitive real-world interactions without breaking a sweat.”

Interestingly, if you explicitly instruct the LLMs to pay attention to demographic attributes, the AI suddenly remembers that empathy can be a thing. “Much like asking someone in retail to smile more, it turns out that giving explicit instructions helps,” commented an unnamed chatbot developer furiously typing away in a dark basement.
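For readers wondering what “explicit instructions” might actually look like in that dark basement, here is a minimal sketch using the OpenAI Python client. The system prompt wording and the example post are hypothetical stand-ins invented for illustration, not the study’s actual prompts:

```python
# Minimal sketch: nudging an LLM toward demographic-aware empathy via the
# system prompt. Assumes the official OpenAI Python client; the prompt
# wording and the example post below are hypothetical, not from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical forum post standing in for real user data.
post = "Lately I feel invisible at work and it's wearing me down."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a peer supporter on a mental health forum. "
                "Pay attention to any demographic attributes the poster "
                "mentions and respond with the same level of empathy "
                "regardless of who is asking."
            ),
        },
        {"role": "user", "content": post},
    ],
)

print(response.choices[0].message.content)
```

Whether the robot’s empathy survives without the smile-more memo taped to its system prompt is, per the study, another matter entirely.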

As the digital age progresses, the promise that your mental health worries will be effortlessly dismissed by a robot instead of an overbooked therapist is becoming a reality. “We hope to expand these capabilities, allowing people worldwide to enjoy the unique experience of digital indifference,” concluded Dr. Notreal before disappearing into the internet’s ether.

In this new dawn for psychological assistance, it’s clear that human progress never rests, even when it probably should.