LAWYER WHO USED CHATGPT FOR LEGAL BRIEF CLAIMS “THE ROBOT MADE ME DO IT”

In a stunning display of professional laziness that would make even government employees blush, Utah attorney Richard Bednar has been sanctioned after being caught using ChatGPT to write a court brief, complete with cases that don’t f@#king exist.

THE IMAGINARY JUSTICE LEAGUE

Bednar’s brief cited the landmark case “People v. Common Sense,” which legal scholars confirm is about as real as your girlfriend from summer camp who “goes to another school.” The Utah appeals court was particularly impressed by his reference to the precedent-setting “Reynolds v. Reality,” authored by the distinguished Justice Figment of Imagination.

“I just asked the glowing text rectangle to ‘make me sound smart’ and hit submit,” Bednar reportedly told colleagues while frantically updating his LinkedIn profile to “Former Attorney.”

LEGAL EXPERTS WEIGH IN

“This is unprecedented stupidity,” explained Professor Bartha Simpleton of Obvious University. “Lawyers typically prefer to make up facts the old-fashioned way, by misinterpreting real cases until they’re unrecognizable.”

Legal ethics expert Dr. Sue Yurass added, “When I went to law school, we had to fabricate our own bulls#!t citations by hand, uphill both ways. These kids today with their digital hallucinations have it too easy.”

SHOCKING STATISTICS

A recent survey found that 87% of legal briefs now contain at least one citation that was pulled directly from an attorney’s @ss. However, only 23% of attorneys are stupid enough to use technology that leaves a digital trail.

BEDNAR’S DEFENSE STRATEGY

When confronted by the court, Bednar allegedly attempted to blame his “summer associate” before remembering he’s a solo practitioner. He then pivoted to his actual defense: “I thought ChatGPT was just a really smart paralegal named Chad.”

The Utah Bar Association has proposed a new continuing education requirement titled “Google Exists, You Dumb@ss” where attorneys learn that judges can, in fact, verify whether cases are real.

FUTURE IMPLICATIONS

Legal technology experts predict this won’t be the last case of AI-assisted malpractice. “We’re seeing an alarming trend of professionals outsourcing their thinking to keyboard-activated hallucination engines,” noted tech analyst Mike Hunt. “Next thing you know, surgeons will be asking Siri which organ to remove.”

In his apology to the court, Bednar promised to “do better” and has reportedly downgraded to using Magic 8 Balls for all future legal research. When asked for comment, ChatGPT responded with a detailed analysis of Bednar v. Common Sense, another case that absolutely does not exist.

At press time, Bednar was reportedly seeking representation from a lawyer who promised to defend him using “only the finest, artisanally crafted bullsh!t, made by human hands.”