Investment Without Optimization: LLM-as-a-Judge Tournaments and Evolution
Posted 29 days ago · Active 29 days ago
papers.ssrn.com · Research · story
informative · neutral
Debate: 20/100
Key topics
AI-Powered Support
Finance
Evolutionary Algorithms
Discussion Activity
Light discussion
First comment: N/A
Peak period: 2 comments (0-1h)
Avg / period: 2
Key moments
- 01 Story posted: Dec 4, 2025 at 4:32 PM EST (29 days ago)
- 02 First comment: Dec 4, 2025 at 4:32 PM EST (0s after posting)
- 03 Peak activity: 2 comments in 0-1h (the hottest window of the conversation)
- 04 Latest activity: Dec 4, 2025 at 4:37 PM EST (29 days ago)
ID: 46153386 · Type: story · Last synced: 12/4/2025, 9:45:12 PM
The first component is a correlation-aware selection method that repurposes a hierarchical clustering dendrogram as a tournament bracket. At each internal node, the LLM allocates selection slots between clusters and performs structured eliminations within correlation regimes.
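As a rough illustration, here is a minimal Python sketch of how such a dendrogram-as-bracket traversal might look. The clustering step uses SciPy's average linkage on a standard correlation distance, and `ask_llm_to_split` is a hypothetical callback standing in for the LLM's slot-allocation judgment; none of these specific choices are taken from the paper itself.

```python
# Minimal sketch (assumptions, not the paper's code): cluster assets by
# correlation, then walk the dendrogram as a tournament bracket, letting a
# hypothetical LLM callback decide how many selection slots each branch keeps.
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

def correlation_dendrogram(returns: np.ndarray):
    """Build a hierarchical clustering tree from a T x N matrix of asset returns."""
    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(0.5 * (1.0 - corr))                 # correlation -> distance
    condensed = dist[np.triu_indices_from(dist, k=1)]  # condensed form for linkage
    return to_tree(linkage(condensed, method="average"))

def tournament_select(node, tickers, slots, ask_llm_to_split):
    """Recursively allocate `slots` selection slots down the dendrogram.

    At each internal node the LLM callback decides how many remaining slots go
    to the left vs. right sub-cluster, so eliminations happen inside a
    correlation regime rather than across the whole universe at once.
    """
    members = [tickers[i] for i in node.pre_order()]
    if slots <= 0:
        return []
    if node.is_leaf() or slots >= len(members):
        return members[:slots]
    left, right = node.get_left(), node.get_right()
    left_names = [tickers[i] for i in left.pre_order()]
    right_names = [tickers[i] for i in right.pre_order()]
    left_slots = int(ask_llm_to_split(left_names, right_names, slots))
    # Clamp to an allocation that both branches can actually fill.
    left_slots = min(left_slots, len(left_names), slots)
    left_slots = max(left_slots, slots - len(right_names), 0)
    return (tournament_select(left, tickers, left_slots, ask_llm_to_split)
            + tournament_select(right, tickers, slots - left_slots, ask_llm_to_split))
```

Called as `tournament_select(correlation_dendrogram(returns), tickers, 10, my_llm_split)`, it would return the ten surviving tickers, with each split decision available as a loggable LLM judgment.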
The second component is a portfolio evolution loop that contains no objective function, expected returns, covariance matrices, or solvers. Instead, the model compares variants using a qualitative rubric (business quality, durability, thematic alignment, drawdown resilience, diversification) and accepts improvements through iterative reasoning.
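A minimal sketch of that loop, under the same caveat: `propose_variant` and `judge_pair` are hypothetical LLM callbacks (the first mutates a text-described portfolio, the second says which of two portfolios it prefers under the rubric and why), and the rubric wording is only paraphrased from the summary above.

```python
# Minimal sketch (assumptions, not the paper's code): hill-climbing by
# qualitative LLM judgment instead of an objective function or solver.
RUBRIC = ("business quality", "durability", "thematic alignment",
          "drawdown resilience", "diversification")

def evolve_portfolio(seed, steps, propose_variant, judge_pair):
    """Iteratively accept rubric-judged improvements and keep a text audit trail."""
    incumbent, audit_log = seed, []
    for step in range(steps):
        candidate = propose_variant(incumbent)                    # LLM mutation
        winner, rationale = judge_pair(incumbent, candidate, RUBRIC)
        accepted = (winner == "candidate")
        audit_log.append({"step": step, "accepted": accepted,
                          "rationale": rationale})                # auditable text
        if accepted:
            incumbent = candidate                                 # greedy accept
    return incumbent, audit_log
```

The returned audit log carries the judge's written rationale for every accept/reject decision, which is what makes the loop text-auditable rather than a black-box search.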
Both mechanisms are fully text-explainable: every elimination, selection, and mutation is auditable.
Two components:
- A correlation tree is repurposed as a tournament bracket. At each node, the LLM allocates “selection slots” across branches and performs eliminations inside correlation regimes.
- A qualitative evolution loop compares portfolio variants using a rubric (business quality, durability, diversification, resilience) and accepts improvements iteratively, without any explicit optimization objective.
The interesting aspect is not the performance but the explainability: every elimination and mutation step is text-auditable.
Curious whether others have experimented with LLM-based reasoning loops as substitutes for classical optimization in areas outside finance.