BoundedArray segfaults for items larger than stack #19954
Comments
What you're doing here is equivalent to declaring a type which is more than 12,800,000 bytes large. Even if you reserve that much space on the heap, types are assumed to be small. You can't access a value so large that it would not even fit on the stack.
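For scale, a sketch of what such a declaration looks like. The item size and capacity here are illustrative, chosen only to reach that figure; they are not taken from the original report:

```zig
const std = @import("std");

// Illustrative sizes only: 100 items of 128,000 bytes each makes the
// backing buffer 12,800,000 bytes, so the BoundedArray type itself is
// at least that large (the buffer is stored inline, plus a `len` field).
const Item = [128_000]u8;
const Big = std.BoundedArray(Item, 100);

comptime {
    std.debug.assert(@sizeOf(Big) >= 100 * @sizeOf(Item));
}
```

Because the buffer is inline rather than behind a pointer, every by-value use of the whole array, or of a single large item, has to be materialized somewhere, and in debug builds that somewhere is the stack.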
Not possible without an implicit heap allocation, which is obviously a no-go for Zig. The ideal solution would be to force a compilation error when the compiler is required to put very large values on the stack. This issue is not unique to BoundedArray.
The actual issue (a design incompatible with this large-item usage) lies within the method implementations, for example:

```zig
const old_item = self.get(i); // tells the compiler to save a copy on the stack
self.set(i, self.pop()); // pop returns the last entry by value; set overwrites index i with it
return old_item; // returns the previously saved copy
```

It would technically be possible:
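When the caller does not need the removed value, the copy could in principle be avoided by moving bytes directly between slots. A hypothetical sketch (swapRemoveDiscard is not part of std, and the free-function shape is only for illustration):

```zig
const std = @import("std");

// Hypothetical variant (not in std): remove item i without ever
// materializing a full T on the stack. The last element's bytes are
// copied directly over slot i, then the array is shrunk by one.
fn swapRemoveDiscard(comptime T: type, buffer: []T, len: *usize, i: usize) void {
    const last = len.* - 1;
    if (i != last) {
        // Byte-wise copy between two heap (or in-place) slots; no
        // stack temporary of size @sizeOf(T) is needed.
        @memcpy(std.mem.asBytes(&buffer[i]), std.mem.asBytes(&buffer[last]));
    }
    len.* = last;
}
```

The `i != last` guard also satisfies `@memcpy`'s requirement that source and destination not alias.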
While I think that introducing such transformations would be nice at some point in the long term, I also think the suggestion is likely to be rejected by maintainers for the sake of language simplicity (more directly mapping the code you write to the generated assembly). Either way, I do think the compiler should produce a compile error once it detects excessive stack usage, i.e. not produce an executable that segfaults because of this issue at run time.
The maximum stack size could only be checked at runtime. It can be set per thread, stacks can be switched (very common for coroutines), and stacks can grow. So the compiler could raise an error based on some arbitrary limit, just like it could refuse to compile a loop that might spin for too many iterations. But arbitrary limits are not great: they mirror neither reality nor the crazy things developers may legitimately do. The actual issue you are raising here is that there are cases where a value is unused, yet its computation is not optimized out in debug mode (your code example doesn't segfault in release mode). I guess trying to do that kind of optimization in debug mode, which intentionally doesn't optimize, would slow down compilation with little benefit. The stack overflow you are getting in debug mode highlights a design error in the data structures the code is using, so it's actually useful.
Zig Version
0.12.0
Steps to Reproduce and Observed Behavior
Even when dealing with a std.BoundedArray that exists on the heap, functions that remove items from the array can result in a segmentation fault due to stack overflow, since each of these functions attempts to fetch or return the old value on the stack.
Expected Behavior
Ideally, manipulation of heap data stays on the heap.
At a deeper level, should the compiler be smart enough not to pull a struct onto the stack when its size is known at compile time to be too large?
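A minimal reproduction along these lines might look as follows. The item size is hypothetical (the original report's exact numbers aren't shown); any item larger than the thread's stack should trigger the same failure:

```zig
const std = @import("std");

// Hypothetical sizes: each item is larger than a typical 8 MiB default
// stack, while the array itself lives entirely on the heap.
const Item = [16 * 1024 * 1024]u8;
const Big = std.BoundedArray(Item, 2);

pub fn main() !void {
    const allocator = std.heap.page_allocator;
    const arr = try allocator.create(Big); // array storage is on the heap
    defer allocator.destroy(arr);
    arr.len = 2; // mark two items as in use; their contents are irrelevant

    // swapRemove copies the removed Item into a stack temporary before
    // returning it, which can overflow the stack in a debug build even
    // though the result is discarded.
    _ = arr.swapRemove(0);
}
```

Note that the array is never passed or returned by value here; the oversized stack temporary comes purely from the removal method's internals.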