LLMs Are Bad Judges. So Use Our Classifier Instead
Posted 3 months ago · Active 3 months ago
papers.ssrn.com · Tech · story
Sentiment: skeptical / mixed
Debate: 60/100
Key topics: AI in Law · LLMs · Classifier Systems
A research paper proposes a classifier system as an alternative to LLMs for judging legal cases, sparking discussion on transparency, trust, and the role of AI in law.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 5h after posting
Peak period: 10 comments (Day 1)
Avg / period: 5.5
Comment distribution: 11 data points (based on 11 loaded comments)
Key moments
- 01 Story posted: Sep 28, 2025 at 7:08 PM EDT (3 months ago)
- 02 First comment: Sep 29, 2025 at 12:08 AM EDT (5h after posting)
- 03 Peak activity: 10 comments in Day 1, the hottest window of the conversation
- 04 Latest activity: Oct 10, 2025 at 5:49 PM EDT (3 months ago)
ID: 45408880 · Type: story · Last synced: 11/20/2025, 2:46:44 PM
Looking at the paper, the classifier definitely does output its reasoning:
"The legal issue at hand is whether the 50/50 royalty split in the 1961 contract binds only pre-existing affiliates or if it also includes affiliates that come into being after the agreement..."
> We built one called Arbitrus. We put it through a mini-Choi test and it mopped the floor with the competition
> Declaration of Interest: [Authors] have financial interests in...Arbitrus.ai.
As the title would suggest, the authors are making no effort to obfuscate this fact.
It seems like the answer is neither: on their website, Arbitrus.ai says it's for private arbitration. "Arbitrus is a private court system with an AI judge. Why use the public court system or expensive AAA arbitration to settle your disputes, when you can do it faster, cheaper, and better with Arbitrus?"
Even LLMs can be viewed as classifiers, as the paper (ad?) itself admits.
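That "LLMs are classifiers too" framing is easy to make concrete. Below is a minimal sketch, my own illustration rather than anything from the paper or Arbitrus; the model and prompt are placeholders. A causal LLM becomes a binary classifier simply by comparing the next-token scores it assigns to two candidate labels:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: treat a causal LLM as a classifier over a fixed label set
# by comparing the next-token logits it assigns to each label.
tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = (
    "Question: Does the 50/50 royalty split in the 1961 contract also bind "
    "affiliates created after the agreement? Answer (Yes or No):"
)
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

# Score each candidate label by the logit of its first token.
scores = {}
for label in (" Yes", " No"):
    first_token_id = tok.encode(label)[0]
    scores[label] = next_token_logits[first_token_id].item()

prediction = max(scores, key=scores.get)
print(prediction.strip(), scores)
```

The output type is the same as any classifier's; the thread's disagreement is about transparency and trust in how that decision gets made, not about whether the label comes from a "classifier" or an "LLM."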