A Dumb Introduction to Z3
Posted 4 months ago · Active 4 months ago
asibahi.github.io · Tech · story
Key topics
Z3
Constraint Solvers
SAT/SMT Solvers
Programming
The article provides a gentle introduction to Z3, a constraint solver, and the discussion revolves around its applications, ease of use, and comparisons with other solvers.
Snapshot generated from the HN discussion
Discussion Activity
- Active discussion; first comment 1d after posting
- Peak period: 13 comments in 42-48h
- Avg / period: 6.5 comments
- Comment distribution: 39 data points (based on 39 loaded comments)
Key moments
- 01 Story posted: Sep 15, 2025 at 7:46 AM EDT (4 months ago)
- 02 First comment: Sep 16, 2025 at 5:23 PM EDT (1d after posting)
- 03 Peak activity: 13 comments in 42-48h (hottest window of the conversation)
- 04 Latest activity: Sep 18, 2025 at 3:30 AM EDT (4 months ago)
ID: 45248558 · Type: story · Last synced: 11/20/2025, 5:27:03 PM
You can literally watch the excitement and faith of the execs drain away when the issue of explainability arises, as blaming the solver is not sufficient to save their own hides. I've seen it hit a dead end at multiple $bigcos this way.
https://www.hakank.org/
I understand that not giving the lower bounds effectively lets it find an arbitrarily low (generally very negative) number of coins that satisfy the problem (so the minimization would basically go to negative infinity).
But why would it respond with “to get 37 in the lowest number of coins I’ll give you 37 ones”?
Or is this kind of like “if the answer to your minimization problem is negative infinity, the answer is undefined behavior”?
Think about it purely in the set-theoretic sense: "what is the minimal set containing 37 elements?" The answer is "the set containing 37 elements."
Either you misunderstood something or please explain more. Note that both the working and broken versions have the same 37 in them.
And the problem statement starts with no coins chosen. It had to actively choose pennies to get that broken result. If you told it about the coins in a different order, it probably would have given a different answer.
It works in a "bottom-up" manner, first giving the minimization goal its lowest possible value and then it tries to "work its way up" to something satisfiable.
So in this case there's something like:
    goal = c1 + c2 + c3
    minimize goal
(even if you write minimize (c1 + c2 + c3), it's still creating that goal variable internally, so to speak)
It now wants to give goal the lowest possible value it can assign it, even if not satisfiable, to start the process. It looks at the bounds of c1, c2 and c3 to determine this, and they're all -infinity.
The lowest possible value of goal is -infinity, which the optimizer has no idea what to do with, and just basically gives up. When used as an application, Z3 will tell you something's wrong by spitting this out in the terminal:
(+ c1 c2 c3) |-> (* (- 1) oo)
A different optimization algorithm, like one that works "top-down" (find the first satisfiable assignment, then make it better) would actually be able to converge. I think OptiMathSAT would be able to handle it because it also does top-down reasoning for this.
CP solvers and MIP solvers all enforce variable bounds so they don't have this issue.
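For concreteness, here is a minimal z3py sketch of the coin problem discussed above. The variable names and denominations are assumptions for illustration, not taken from the article; the point is only to show where the lower bounds go.

    # Minimal z3py sketch of the coin-change example (names/denominations assumed).
    from z3 import Int, Optimize, sat

    opt = Optimize()
    pennies, nickels, dimes, quarters = Int('p'), Int('n'), Int('d'), Int('q')

    # Make change for 37 cents.
    opt.add(pennies + 5 * nickels + 10 * dimes + 25 * quarters == 37)

    # Without these lower bounds the objective below is unbounded, the optimizer
    # reports something like "(* (- 1) oo)", and it falls back to an arbitrary
    # satisfying assignment, as described in the comments above.
    opt.add(pennies >= 0, nickels >= 0, dimes >= 0, quarters >= 0)

    opt.minimize(pennies + nickels + dimes + quarters)
    if opt.check() == sat:
        print(opt.model())  # expect something like q=1, d=1, p=2 (4 coins)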
> When used as an application, Z3 will tell you something's wrong by spitting this out in the terminal: (+ c1 c2 c3) |-> (* (- 1) oo)
This part is what I was looking for. I was surprised that the behavior wasn’t accompanied by any warnings in the article.
Yes, it picked a valid result and gave up.
I got a similar nonsensical result in my run-ins[1] with SAT solvers too, until I added a lowerbound=0.
The "real life" follow-up question in interviews was how to minimize the total number of intermediate rows for a series of joins within a cost-based optimizer.
[1] - https://gist.github.com/t3rmin4t0r/44d8e09e17495d1c24908fc0f...
I used it to solve the new NYT game, Pips: https://kerrigan.dev/blog/nyt-pips
Z3 is fine with it--its job is to find any satisfying model. The possible outcomes are "the puzzle is solvable and here's a solution" or "the puzzle isn't solvable."
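As a tiny illustration of that "solvable or not" contract, a toy z3py check (the constraint here is a placeholder, not the actual puzzle encoding):

    # check() either yields a model or reports unsat.
    from z3 import Int, Solver, sat

    s = Solver()
    x = Int('x')
    s.add(x > 0, x < 1)  # no integer fits, so this is unsatisfiable

    if s.check() == sat:
        print("solvable:", s.model())
    else:
        print("not solvable")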
For example, in https://zayenz.se/blog/post/benchmarking-linkedin-queens/#te... I took a large number of LinkedIn Queens puzzles, and I filtered out the ones that were not well-formed so that they wouldn't mess up the benchmarking and statistics.
You'll be able to solve bigger problems with familiar/legible syntax and even use the code in production.
That’s how I understand it at least! Someone please correct me if I’m off base.
In general, for modeling layers embedded in programming languages, having operator overloading makes the code so much better to work with. Modeling layers where one has to use functions or strings that are evaluated are much harder to work with.
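For example, z3py leans on Python's operator overloading, so a constraint reads like an ordinary arithmetic expression; the function-style alternative in the comment below is hypothetical, only there to show the contrast.

    # z3py overloads +, *, ==, <, etc., so constraints read like plain expressions.
    from z3 import Ints, Solver, sat

    x, y = Ints('x y')
    s = Solver()
    s.add(x + 2 * y == 7, x > 0, y > 0)
    # A functions-only API would force something like eq(add(x, mul(2, y)), 7)
    # (hypothetical names), which is noticeably harder to scan.

    if s.check() == sat:
        print(s.model())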
In particular, with linear algebra libraries that have two or three different multiplication operators (cross, dot, bitwise): if they commit to using functions from the start, they end up far more readable than any attempt to abuse operator overloading.
I prefer:
- DSL (domain specific language, especially for symbolic solving such as mathematica)
- Strings (that's just a DSL hiding in a parse() function, which can be called from a different language. Write the code as if it's a DSL, but without pulling in a new tech stack)
- Functions, but preferably in a chainable style. sym("a").add(1).eq(sym("b")).solve() is better than solve(eq(add(1,a),b))
I have seen operator overloading done right... but I've also cursed loudly at it done badly.
Rust in particular uses up most symbols for other things (the borrow checker takes a lot of them, and "==" is forced to be consistent), so you end up with only "+ - * /" as overloadable. "+" and "-" being the least abused, in my experience, perhaps makes that a good compromise.
Is it worth having operator overloading in a language or not? That is a different question. People do bad things with every feature, including operator overloading. I'm slightly in favor of it, but I'm not a fan of using it for anything other than either immediate arithmetic or creating expressions trees representing arithmetic.
There is a Python package for MiniZinc, but it is aimed at structuring calls to MiniZinc, not as a modeling layer for MiniZinc.
Personally, I think it is better to start with constraint programming style modeling as it is more flexible, and then move to SMT style models like Z3 if or when it is needed. One of the reasons I think CP is a better initial choice is the focus on higher-level concepts and global constraints.
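As a rough sketch of what that looks like in practice (assuming the minizinc Python package and the Gecode solver are installed): the model itself, including the alldifferent global constraint, is plain MiniZinc text, and Python only orchestrates the call.

    # The Python package drives MiniZinc; the model is a MiniZinc string.
    import minizinc

    model = minizinc.Model()
    model.add_string("""
        include "alldifferent.mzn";
        array[1..4] of var 1..4: x;
        constraint alldifferent(x);
        solve satisfy;
    """)

    gecode = minizinc.Solver.lookup("gecode")
    instance = minizinc.Instance(gecode, model)
    result = instance.solve()
    print(result["x"])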
My understanding is that solvers are like regexes: they can easily get out of hand in runtime complexity. At least this is what I have experienced with iOS's AutoLayout solver.
For the iOS AutoLayout, what kind of issues have you seen, and how complex were the problems?