I Spent Three Nights Solving Listen Labs Berghain Challenge (and Got #16)
Key topics: Optimization, Puzzle-Solving, Algorithm Design
The author shares their experience solving the Listen Labs Berghain Challenge, a complex optimization problem, and discusses their approach with other participants in the comments.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement: 23 comments. First comment after 2 days; peak of 10 comments in the 72-84h window; average of 5.8 comments per period.
Key moments
- Story posted: Sep 21, 2025 at 10:11 AM EDT
- First comment: Sep 23, 2025 at 4:19 AM EDT (2 days after posting)
- Peak activity: 10 comments in the 72-84h window (hottest stretch of the conversation)
- Latest activity: Sep 25, 2025 at 5:40 PM EDT
ID: 45322906 · Type: story · Last synced: 11/20/2025, 1:23:53 PM
A quick sanity check: in Scenario 2 you needed 300 creative people, each with a ~6.2% chance of showing up. The odds of getting a sequence of people where the first-place score (2,906 rejections + 1,000 accepts = 3,906 total people) is even possible are on the order of 1 in 10,000, and that's without even factoring in the other constraints.
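A back-of-envelope version of that estimate, using a normal approximation to the binomial tail (the 6.2% and the counts come from the comment above):

```python
import math

# P(at least 300 "creative" arrivals among 3,906 people, each creative w.p. 0.062)
n, k, p = 3906, 300, 0.062
mean = n * p                          # ~242 expected creatives
sd = math.sqrt(n * p * (1 - p))       # ~15.1
z = (k - 0.5 - mean) / sd             # continuity-corrected normal approximation
tail = 0.5 * math.erfc(z / math.sqrt(2))
print(f"P(X >= {k}) ~ {tail:.1e}")    # on the order of 1e-4, i.e. roughly 1 in 10,000
```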
That way people could test offline with random sequences, but the leaderboard runs would use the same seed for everyone. Maybe I'm missing something obvious, but I think this would have lessened the impact of luck.
Sites like Kaggle usually get around this problem by running contestant code in a containerized environment server side, but even then you can get clever with tricks to leak info.
Also, did you optimize for the best case in any way, versus the expected case?
My other trick was to only build the full DP table for the latter half of the game (i.e. when all the constraints are at least 50% satisfied) which across 4 dimensions reduces the size by a factor of 16. For the beginning half of the game I combined Berlin and techno into a single parameter, which technically isn't perfect but doesn’t matter too much in the early game. I wrote up my approach here if you want more details: https://chriskw.xyz/2025/09/16/Berghain/
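A toy sketch of the kind of DP described above, under assumed numbers (two binary attributes instead of four, made-up arrival probabilities, a 20-seat endgame; not the author's actual code). The value function is the expected number of future rejections under the optimal accept/reject rule; the memory trick in the comment corresponds to only tabulating states where each deficit is at most half its quota:

```python
from functools import lru_cache

N_SLOTS = 20            # admits still needed (toy-sized endgame)
# Made-up joint arrival probabilities for (has_attr_a, has_attr_b):
P = {(0, 0): 0.40, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.10}

@lru_cache(maxsize=None)
def exp_rej(slots, need_a, need_b):
    """Minimum expected future rejections from this state."""
    if slots == 0:
        return 0.0 if need_a == need_b == 0 else float("inf")
    if max(need_a, need_b) > slots:
        return float("inf")          # not enough seats left to meet a quota
    # Value after accepting each arrival type; rejecting costs 1 and keeps the
    # state, so V = sum_t p_t * min(A_t, 1 + V). Solve the fixed point by
    # trying each "accept the cheapest types" prefix of the sorted options.
    options = sorted((exp_rej(slots - 1, max(need_a - a, 0), max(need_b - b, 0)), p)
                     for (a, b), p in P.items())
    best, q, s = float("inf"), 0.0, 0.0
    for a_val, p in options:
        q += p                        # probability mass we accept
        s += p * a_val                # expected cost after an accept
        if s < float("inf"):
            best = min(best, (s + (1.0 - q)) / q)
    return best
```

For example, `exp_rej(1, 1, 1)` (one seat, both quotas unmet) accepts only the (1,1) type and waits out an expected (1 − 0.1)/0.1 = 9 rejections.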
Re: optimizing for the best case vs. the expected case, I thought about that, but in simulations my strategy mostly performed the same as a "perfect knowledge" strategy where you could see all of the people in line ahead of time and retroactively accept people. When it underperformed, it was usually because some miraculous string of people showed up near the end, but betting on that happening seemed like it would do more harm than good, i.e. it would throw away more best-case scenarios than it would salvage.
Also, I do remember seeing you on the leaderboard, cool stuff!!!
I ran simulations with perfect information and found the lower bound for scores. Scenario 2 had a mean of 3,743 rejections with a standard deviation of 265. This is the curve formed from simulated data and a strategy with perfect information, i.e. one built after knowing the random assignments.
So the winners had scores that I could not even theoretically achieve unless I could see thousands of scenarios.
So I ran my code locally and was happy that my code was always just a few rejections off of optimal and called that a private success.
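The perfect-information lower bound can be sketched like this (a simplified two-attribute stand-in, not the commenter's actual harness): scan the arrival sequence and stop at the first prefix from which a feasible set of `n_accept` people can be chosen; the prefix length minus `n_accept` is the minimum possible number of rejections.

```python
def min_rejections(seq, n_accept, quota_a, quota_b):
    """seq: list of (has_a, has_b) tuples in arrival order."""
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for i, person in enumerate(seq):
        counts[person] += 1
        if i + 1 < n_accept:
            continue
        # Greedy feasibility: (1,1) people cover both quotas, so take them
        # first; taking fewer of them never helps.
        both = min(counts[(1, 1)], n_accept)
        need_a = max(quota_a - both, 0)
        need_b = max(quota_b - both, 0)
        if (counts[(1, 0)] >= need_a and counts[(0, 1)] >= need_b
                and both + need_a + need_b <= n_accept):
            return i + 1 - n_accept   # people seen minus people admitted
    return None  # sequence too short to ever satisfy the quotas
```

For instance, with `n_accept=3`, quotas (2, 1), and arrivals `[(0,0), (0,0), (1,1), (1,0), (1,0)]`, the first feasible prefix has length 4, so the optimal clairvoyant doorman rejects exactly 1 person.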
I remembered that I had a free VPN subscription and put it to good use: when requests started to fail, a script changed my IP to another random country.
As for the solution, I used an approach similar to primal-dual optimization, plus some manual tweaks that made sure it capitalized when the random gods provided. To make sure I was close to the optimum, I saved the streams and ran an offline solver.
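A minimal sketch of what a primal-dual style door policy could look like; this is my reconstruction, not the commenter's code, and the update schedule and thresholds are guesses. Each constraint carries a dual price, a person is accepted when the total price of the unmet constraints they help clears the (normalized) cost of a seat, and prices rise multiplicatively for constraints that fall behind schedule:

```python
import math

class DualDoorPolicy:
    """Sketch of a primal-dual style acceptance rule (illustrative only)."""

    def __init__(self, quotas, n_accept, eta=0.1):
        self.need = dict(quotas)              # unmet minimum per attribute
        self.slots = n_accept                 # seats still to fill
        self.lam = {k: 1.0 for k in quotas}   # dual price per constraint
        self.eta = eta                        # price learning rate

    def decide(self, attrs):
        # A person's value = total price of the unmet constraints they help.
        value = sum(self.lam[k] for k in self.lam
                    if attrs.get(k) and self.need[k] > 0)
        # Pessimistic slack: seats left beyond what the deficits require
        # (ignores that one person can satisfy several constraints at once).
        slack = self.slots - sum(self.need.values())
        accept = value >= 1.0 or slack > 0    # 1.0 = normalized price of a seat
        if accept:
            self.slots -= 1
            for k in self.need:
                if attrs.get(k) and self.need[k] > 0:
                    self.need[k] -= 1
        # Raise prices of constraints that are behind schedule, decay the rest.
        for k in self.lam:
            behind = self.need[k] > 0 and self.need[k] >= 0.5 * self.slots
            self.lam[k] *= math.exp(self.eta if behind else -self.eta)
        return accept
```

With three seats left and quotas of 2 "berlin" and 1 "techno", the policy takes someone who helps a quota but turns away a person with neither attribute, since there is no slack for filler seats.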
What is the probability of a person with both the attributes "looks like they belong to berghain" and "can solve an obscure live optimisation challenge"?
As someone who goes there often, I can say it's surprisingly high
As a person who has been there, you'd be surprised. Strict door policy is sort of a soft intelligence test, and the music there is so insanely good that it attracts highly creative, highly open folks.
You can make a hiring puzzle out of that.
We have an AI customer-interviewing platform, and our customers ask us things like: “I want to talk to 200 people, at least 100 who are ChatGPT Pro users, 75 who use Gemini weekly, and 50 from each region of the US.”
We don’t know those attributes upfront, so we have to ask participants screening questions and decide in real time whether to move forward.
Of course, it's a bit different as we optimize for the average case, not the best case, and we don't know the distributions (but can estimate with LLMs!).
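An illustrative version of that real-time screening decision (a hypothetical helper with made-up names, assuming estimated attribute probabilities are available; real quotas overlap and need more care): qualify anyone who fills an unmet quota, and admit a "filler" participant only if every remaining quota is still expected to be met from the slots that remain.

```python
def admit(attrs, slots_left, deficits, p_attr):
    """attrs: set of this participant's attributes; deficits: remaining
    minimum count per attribute; p_attr: estimated P(an arrival has attr)."""
    # Always take someone who helps an unmet quota.
    if any(a in attrs for a, d in deficits.items() if d > 0):
        return True
    # Otherwise they consume a "filler" slot: only safe if every quota is
    # still expected to be met from the slots remaining afterwards.
    return all((slots_left - 1) * p_attr[a] >= d for a, d in deficits.items())
```

For example, with 100 slots left and 50 ChatGPT Pro users still needed, a non-qualifying participant is admitted if ~60% of arrivals qualify (99 × 0.6 ≈ 59 expected) but rejected if only ~40% do.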
Really like whoever's idea it was to make this a challenge: viral potential and actually useful :)
I wonder if this exact solve tried setting "more rejections than X" as failure, so that you can get a better best case at the cost of dropped worst cases.
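A toy simulation of that idea, with entirely made-up numbers: abandon any run that exceeds X rejections and retry, which sacrifices the worst cases to fish for a better best case.

```python
import random

def attempt(cutoff, accept_p, n_accept, rng):
    """One simulated run: count rejections until n_accept people are admitted,
    but give up (return None) once rejections exceed the cutoff."""
    rejections = accepted = 0
    while accepted < n_accept:
        if rng.random() < accept_p:
            accepted += 1
        else:
            rejections += 1
            if rejections > cutoff:
                return None          # declared failure; a fresh run may do better
    return rejections

rng = random.Random(42)
runs = [attempt(250, 0.3, 100, rng) for _ in range(200)]
finished = [r for r in runs if r is not None]
best = min(finished) if finished else None
```

The cutoff caps the reported score at X by construction, while throwing away every run unlucky enough to cross it.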