DefExc.widen Excluded range to static size instead of join #502
Conversation
This of course doesn't change anything w.r.t. the number of solves, but just avoids a useless idempotent join. But it also means that …
Was that how we left it? I.e. inlined all … (analyzer/src/domains/lattice.ml, line 21 in ef58842)
However, searching `widen old [^(]`, there's (line 68 in ef58842):
analyzer/src/solvers/topDown_term.ml, line 89 in b52b784
Not sure the useless join is relevant. Depends on how confident we are that we (currently and in the future) enforce the join.
Currently we have …
I would leave it as it is, unless there are massive performance benefits to removing the joins. I think it is hard to ensure that everyone does it all the time in the future.
So, merge like this, or should we think about some example where it's less precise? Sample: …
I think we didn't change it because it's what was almost always done. And we've definitely had domains which assume that, e.g. intervals: #414. I suppose to be on the safe side, we should change those two solvers to also do the join for widen, especially since there are domains assuming that.
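For illustration, here is a minimal OCaml sketch (not Goblint's actual solver code; the module names are made up) of the two solver behaviours being discussed: joining before widening, so that a domain's widen always sees a second argument above the first, versus passing the raw new value through.

```ocaml
(* Minimal sketch, not Goblint's actual code. *)
module type Lattice = sig
  type t
  val leq   : t -> t -> bool
  val join  : t -> t -> t
  val widen : t -> t -> t  (* several domains implicitly assume leq old new here *)
end

module SolverStep (D : Lattice) = struct
  (* Join before widening: widen always sees old <= (join old new). *)
  let update_with_join old_v new_v = D.widen old_v (D.join old_v new_v)

  (* Without the join: relies on every widen to cope with an arbitrary,
     possibly incomparable, second argument (how issues like #414 arise). *)
  let update_without_join old_v new_v = D.widen old_v new_v
end
```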
Possibly yes. There isn't any rush to actually avoid those possibly unnecessary joins. Better safe than sorry.
I don't have an example, but was just thinking how this is a bit like the interval widening: the type's range is the most imprecise thing corresponding to infinity and this would widen to that. The analogous narrowing would be such that it may improve on the widening results, i.e. come back down, but only from the type's range. Although I suppose this doesn't really matter. These ranges (especially since they're rounded to integer types) form only very short ascending chains anyway, like the ones shown in the PR description.
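To make that analogy concrete, here is a hedged sketch of such an interval widen/narrow over a fixed type range (simplified, not the actual IntDomain code; the 32-bit bounds are just an example):

```ocaml
(* Simplified sketch, not the actual IntDomain code: the type's range plays
   the role of [-oo, +oo]; widen jumps unstable bounds to it, and narrow may
   only come back down from exactly those type bounds. *)
let type_min, type_max = -2147483648, 2147483647  (* e.g. a 32-bit int type *)

type interval = int * int  (* (lo, hi) with lo <= hi assumed *)

let widen ((lo1, hi1) : interval) ((lo2, hi2) : interval) : interval =
  ((if lo2 < lo1 then type_min else lo1),
   (if hi2 > hi1 then type_max else hi1))

let narrow ((lo1, hi1) : interval) ((lo2, hi2) : interval) : interval =
  ((if lo1 = type_min then lo2 else lo1),  (* improve only widened bounds *)
   (if hi1 = type_max then hi2 else hi1))

(* widen (0, 9) (0, 10) = (0, type_max);  narrow (0, type_max) (0, 42) = (0, 42) *)
```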
As I said, it has already been an implicit assumption for a very long time for some domains, so this should be enforced anyway and hence could also be assumed by all other domains. Not enforcing it is how issues like #414 happen.
I'll actually do an SV-COMP run because the case where this may matter very much is for globals, as we never do narrowing for them. If there is a difference there, we might want to have an option to toggle this behavior. Actually, we should probably always have that toggle (see #459).
Was there some noticeable difference?
When comparing between e75f7d7 and cb439e0 we lose a total of 12 points, and the runtimes of some of the tests go from a few seconds to timing out after 15 min. Conversely, for some tests runtimes improve a bit. Also, for some of those, what is very strange is that we have a quite low number of variables that are all uncovered very quickly, and then the number of evals keeps going up without any new variables being discovered. E.g. for …
and then after 15 min …
Before, it terminated in 3 s. All the details are here: … Of course there now is #495 in between, so some of the changes may be due to that, but it seems a bit unlikely that this would cause an issue with evals going up and the number of vars remaining constant.
Pretty wide graph 😄 The eval thing is interesting. Sounds like it could be address-related, but then you'd expect the context/vars to go up as well.
Also getting a timeout on …
This also seems to be the reason for this OpenSSL nontermination: goblint/bench#7 (comment). Now that we have an option for this, I'm going to just toggle it on.
We can just not use it, but it's somehow indicative of some other issue, no?
Indeed, I don't see anything conceptually wrong with quicker widening, so there must be some issue worth looking into. Probably easiest to minimize one of the SV-COMP programs as opposed to knot_comb or openssl.
I already started on knot before lunch. I have it down to 400 lines and will continue with it this afternoon. |
This example is based on knot: 3a6491a, and I've tried manually to simplify it, but if you remove any of the structs or pointers, the problem disappears. |
👍 The example I have thus far is single-threaded and has only two functions; I'm trying to narrow it down to just one at the moment. Maybe having two small examples is also insightful.
Sorry, I forgot about that already-minimized example. If it happens just with structs and pointers, then it might have something to do with widening of indices in offsets of addresses. Maybe there's another point where …
int main(int argc, char **argv)
{
  char buf[512];
  int top;
  char *start;

  while (1) {
    start = &buf[top];
  }
  return 0;
}

Here is another minimal example. What is interesting is that the … If I change widen such that …
However, it is very strange that we have non-termination otherwise, because the widen should be able to arbitrarily go to larger values without risking non-termination (there is no narrowing for …).
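To spell out that termination argument, here is a generic sketch (not Goblint's solver; the function name is made up): the ascending iteration below stays finite as long as repeated widening stabilizes, and jumping to a larger value sooner only shortens that chain, so a diverging run points at a widen that keeps producing new values even when its arguments have stopped growing.

```ocaml
(* Generic sketch, not Goblint's solver: iterate the abstract loop effect f
   with widening until a post-fixpoint is reached. *)
let rec lfp_with_widening ~leq ~widen ~f x =
  let y = f x in
  if leq y x then x                                 (* y already covered: stable *)
  else lfp_with_widening ~leq ~widen ~f (widen x y) (* otherwise widen and repeat *)
```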
The widening point removal is far from new, I think. Thanks for minimizing the problem down to such a small program. I traced it too, and beyond what you mention, this happens: …
The problem is the last line, where … EDIT: I think I found the culprit, opening a PR in a moment.
But that is not even so wrong, because it is called from a …
Fix for Address domain widen: #534
Maybe post the bench results here then. Hopefully better than the results above (#502 (comment)). The initial motivation for the change was to have shorter traces for small programs, but for big loops with many function calls inside (that's why I tried …)
Originally posted by @vesalvojdani in #534 (review)
In `DefExc` we currently have `let widen = join`. As I see it, this causes extra iterations for little to no gain. Consider this loop: …

The excluded range via `join` already does something like a threshold widening by going through the intermediate int sizes (`i++` overflows range) before reaching the static type: …

Instead of doing this, one can jump to the static type directly for `widen` and only keep this stepwise behavior for `join`. … from 15 calls to `solve` without this change.

I tried to find an example where you would get a less precise exclusion range. But here for `i` we already have `[-31,31]` with `widen = join`: …

I guess you can construct some example where you widen some value in a loop that is not changed each iteration but only joined from some branches. However, the saved iterations for every int loop probably outweigh this loss.
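A hedged sketch of the difference described above, with the exclusion range approximated by just a bit-size (this is only an illustration of the idea, not the real `DefExc` module):

```ocaml
(* Illustration only, not the real DefExc module: the excluded range is
   approximated by the number of bits needed to cover it. *)
type range = int

(* join keeps the larger of the two ranges, so over the loop iterations the
   range creeps up through the intermediate sizes (8, 16, 32, ...). *)
let join_range (r1 : range) (r2 : range) : range = max r1 r2

(* proposed widen: as soon as the range grows at all, jump directly to the
   range of the variable's static type instead of taking intermediate steps. *)
let widen_range ~static_range (r1 : range) (r2 : range) : range =
  if r2 <= r1 then r1 else static_range
```

In this toy model, `widen = join` only stabilizes after stepping through each intermediate size the loop actually reaches, whereas the jump stabilizes after a single widening, which is where the saved iterations come from.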