Researchers Discover the Optimal Way to Optimize
Key topics
Researchers have made progress in understanding the optimal way to optimize certain problems, but the work is still largely theoretical and faces significant scalability challenges, sparking discussion about the practical applications and potential future breakthroughs.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 3d after posting
- Peak period: 10 comments in 96-108h
- Avg / period: 3.5
Based on 14 loaded comments
Key moments
- Story posted: Oct 13, 2025 at 6:41 PM EDT (3 months ago)
- First comment: Oct 16, 2025 at 3:18 PM EDT (3d after posting)
- Peak activity: 10 comments in 96-108h (hottest window of the conversation)
- Latest activity: Oct 20, 2025 at 10:12 PM EDT (2 months ago)
> The next logical step is to invent a way to scale linearly with the number of constraints. “That is the North Star for all this research,” she said. But it would require a completely new strategy. “We are not at risk of achieving this anytime soon.”
Here "risk" seems odd (or it's a translation/language-nuance mistake).
However, this requires solving a quadratic 'best direction' problem at each step, which IIRC reduces to a 'Linear complementarity problem (LCP)' (https://en.wikipedia.org/wiki/Linear_complementarity_problem). The LCP scales with the number of active constraints, which is never larger than the dimensionality (N) of the problem. So if your number of constraints P >> N, you are golden.
Note that Dantzig also contributed to the LCP.
Obviously, any breakthrough in these basic methods translates directly into more efficient learning algorithms for training single-layer neural nets (perceptrons). Extending that to multi-layer NNs is not far off from there...
[1] https://github.com/cvxpy/cvxpy
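To make the P >> N point concrete, here is a minimal sketch (not from the thread; the sizes and data are made up for illustration) of a quadratic 'best direction'-style problem with many linear inequality constraints, written with cvxpy [1]:

```python
# Illustrative sketch only: a quadratic "best direction"-style problem with
# many linear inequality constraints (P >> N), solved with cvxpy [1].
# All sizes and data here are made up for demonstration.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
N, P = 10, 1000                            # dimensionality N, constraints P >> N
A = rng.standard_normal((P, N))
b = np.abs(rng.standard_normal(P)) + 1.0   # keeps x = 0 feasible
c = rng.standard_normal(N)

x = cp.Variable(N)
objective = cp.Minimize(0.5 * cp.sum_squares(x) + c @ x)
problem = cp.Problem(objective, [A @ x <= b])
problem.solve()

# Only the constraints that are tight at the optimum (the active set) matter,
# and generically there are at most N of them, however large P is.
active = int(np.sum(np.isclose(A @ x.value, b, atol=1e-6)))
print("optimal value:", problem.value)
print("active constraints:", active, "out of", P)
```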
It is probably just a “git gud” situation. I even re-read Lars Blackmore’s “Lossless Convexification of Nonconvex Control Bound and Pointing Constraints of the Soft Landing Optimal Control Problem” from time to time, hoping to find a way to apply a similar convexification idea to my own problems. With all of that, I’m not that surprised that convex optimisation is not more widely known.
To me, convex optimization is more the domain of engineering, where continuous functions and/or stochastic processes are involved.
Much of signal processing and digital communications is built around convex optimization because it’s a sensible way to concretely answer “was this designed right?”.
One can use basic logic to prove a geometry theorem, or the behavior of a distributed algorithm.
But if one wants to prove that a digital filter was designed properly for random/variable inputs, that leads to solving convex optimization problems (minimization of mean squared error or the like).
Of course, whether the right problem is being solved is a different issue. MMSE is just mathematically extremely convenient but not necessarily the most meaningful characterization of behavior.
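As a concrete illustration of the MMSE point, here is a minimal sketch (not from the thread; the channel, noise level, and tap count are made-up assumptions, and plain least squares stands in for the general convex formulation) of designing an FIR filter by minimizing mean squared error on a training signal:

```python
# Illustrative sketch only: FIR filter design by minimizing mean squared error
# on a known training signal. Channel, noise level, and tap count are made up.
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_samples = 8, 5000

s = rng.standard_normal(n_samples)                  # known training signal
channel = np.array([1.0, 0.5, -0.2])                # hypothetical channel
x = np.convolve(s, channel)[:n_samples] + 0.05 * rng.standard_normal(n_samples)

# Regression matrix of delayed observations: column k is x delayed by k samples.
X = np.column_stack([np.roll(x, k) for k in range(n_taps)])
X[:n_taps, :] = 0.0                                 # discard wrap-around from np.roll

# MMSE / least-squares tap weights: w = argmin ||X w - s||^2, an unconstrained
# convex problem with a closed-form solution.
w, *_ = np.linalg.lstsq(X, s, rcond=None)

residual = X @ w - s
print("per-sample MSE:", np.mean(residual ** 2))
```

The same criterion can also be minimized adaptively (e.g. LMS); the point is only that “was this filter designed right?” turns into a convex minimization.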
The reality is that there are some great, mature solvers out there that work well enough for most cases. And while it might be possible to eke out more performance in specific problems, it would be very hard to beat existing solvers in general.
Theoretical developments like this, while interesting on their own, don't really contribute much to day-to-day users of linear programming. A lot of smart people have worked very hard to "optimize the optimizers" from a practical standpoint.
[0] https://arxiv.org/abs/1412.6980
There are a lot of problems like this. Traveling salesman, for example. Exponential in the worst case, but polynomial almost all the time.
Does this indicate progress on P = NP?