StringBuilder.Append(char) doesn't respect maxCapacity #9281
Comments
This is a known issue. The tradeoff at the time was between increased implementation complexity and inefficiency, vs. losing compatibility in a corner case (that you WANT an exception thrown) that has no known real-world value. We opted to ignore the problem (frankly, I wanted to deprecate maxCapacity) until such time as we could demonstrate some customer value (a scenario where a user would realistically care about that behavior). Unless we have such a scenario, I would recommend continuing to ignore this issue.
@vancem, I agree with you that there's little-to-no value in maxCapacity. Do we care, then, if other StringBuilder.Append overloads start ignoring it as well? e.g. I noticed this while changing methods like StringBuilder.Append(int) to avoid the ToString() calls, and wondered whether I need to add the associated checks for maxCapacity.
I would have to do some research to be sure, but I believe originally we uniformly ignored maxCapacity (or at least kept it out of all hot paths). What I suspect happened, however, is that someone logged a bug (much like this one), we 'fixed' it for the case of that bug, and we ended up being non-uniform. (sigh...) That is why ideally we would have a real deprecation story...
Alternatively, is there any reason we don't just do the max computations when constructing / growing the StringBuilder? We already have to check that we're not walking off the end of the current chunk, so if we just ensured we never allow the chunks to go beyond the max capacity, all such checks would be kept off hot paths.
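The idea can be sketched as follows. This is a hypothetical helper, not the actual StringBuilder internals; the names and signature are illustrative only:

```csharp
using System;

static class ChunkSizing
{
    // Hypothetical sketch: when growing, clamp the new chunk so the
    // total capacity never exceeds maxCapacity. With this invariant,
    // Append(char) needs only its existing end-of-chunk check; the
    // maxCapacity enforcement happens on the (rare) growth path.
    public static int NextChunkSize(int currentLength, int requestedGrowth, int maxCapacity)
    {
        if (currentLength >= maxCapacity)
            throw new ArgumentOutOfRangeException(
                nameof(requestedGrowth), "Growing past maxCapacity.");
        return Math.Min(requestedGrowth, maxCapacity - currentLength);
    }
}
```

Under this scheme the ordinary per-character append never consults maxCapacity at all; hitting the end of the last permitted chunk is itself the capacity violation.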
I think you will find that because we support 'insert', things get trickier to do efficiently. Given that insert is not a prime scenario, you could probably make everything work reasonably well, as you point out, but the overarching question is: why create complexity for no value? If there WAS value, doing something like that may make sense, but really we have yet to come up with ANY real-world reason why anyone would want the behavior.
From my perspective, the complexity is already there, all the more so because we only partially support it: when reading through the code it's confusing what's a bug and what's not, and when modifying the code, what's necessary and what's not.
So, to get concrete:
I would say no, don't do the checks. Like I said, until we have SOME indication that there is value in maxCapacity, we should not bother. If we get to the point where we care, we can do it then. (As I recall, we finessed this by saying that you were not REQUIRED to throw on exceeding MaxCapacity, you were just ALLOWED to, which we chose not to do. Thus pretty much anything between doing nothing and being super strict is to spec.)
Okey dokey.
Repro:

This should throw an exception, since the StringBuilder has a maxCapacity of 5 but a string of length 8 is appended one character at a time; instead it successfully outputs 12345678. Changing it to use Append(string) instead of Append(char) results in an exception, as expected.
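The original snippet is not shown here, but a minimal reconstruction of the repro described above might look like this (the behavior comments restate what the issue reports, not guarantees about every runtime version):

```csharp
using System;
using System.Text;

class Repro
{
    static void Main()
    {
        // capacity 0, maxCapacity 5.
        var sb = new StringBuilder(0, 5);

        // Append char-by-char: per the issue, no exception is thrown
        // even though the final length (8) exceeds maxCapacity (5).
        foreach (char c in "12345678")
            sb.Append(c);
        Console.WriteLine(sb.ToString()); // per the issue, prints 12345678

        // Append the whole string instead: this throws
        // ArgumentOutOfRangeException, as expected.
        var sb2 = new StringBuilder(0, 5);
        try
        {
            sb2.Append("12345678");
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("Append(string) threw as expected");
        }
    }
}
```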
This repros on both .NET Framework and .NET Core. It seems like either:

a) Append(char) should have a check added for maxCapacity, or
b) when creating a new chunk (either the initial one or a subsequent one), the capacity should be capped to maxCapacity - Length (assuming this is correct, it would be preferable, since we would allocate less and could avoid the extra checks in Append(char)).

cc: @vancem, @AlexGhiondea