Replies: 1 comment
z3 detects that the benchmark is QF_NIA (quantifier-free non-linear integer arithmetic). For this logic it uses a tactic that first attempts to establish feasibility by translating the problem into bounded bit-vector arithmetic. If you add explicit bounds to the variables, it will use those bounds. The larger the bounds, the more bits the encoding needs, which leads to larger SAT problems. Solving bit-vector multiplication with SAT is inefficient, but it is the best option available in z3: the general-purpose solver is even worse and, as it stands, cannot find solutions at all.
-
Consider one of the STC constraints from the benchmark set:
This takes ~80 seconds to solve on my machine (the satisfying assignment is 5, 10, -7). But if we add constraints on the size of the variables:
it actually takes much longer to solve (~200 seconds on my machine). Very small bounds (like 11 > x > -11) of course make solving almost instant. Anecdotal experiments suggest there is some "equilibrium" bound size (for this constraint around 900) at which the bounded version takes the same amount of time as the unbounded one. This behavior, though with different specific bounds and running times, seems to be consistent across many STC constraints.
Does anyone have some insight into why bounding the values could slow down solving compared to the completely unbounded version? Which different tactic applications could affect this? Would it be possible to predict this type of behavior and revert to solving the unbounded constraint to save time? Thank you for any help!