Clarification requested for Expects/Ensures contract violation behavior #1561
Editors call: We're in a halfway state where we had hoped Standard C++ would add contracts, but the standard has not yet done so. We definitely do want […]. However, I.6 also recommends […] |
(Edit: you asked about a continuation mode, and I realize that throwing isn't continuation, but the throwing mode is perhaps the most prominent example of an alternative violation behavior, so I hope discussing it here is also appropriate.) Let me first state the obvious: I also use throw-on-contract-violation when I write Python bindings for my functions for interactive use. A REPL environment blurs the distinction between programmer and user that is used to motivate fail-fast on precondition violation; having the REPL process terminate when passing wrong arguments is rather inconvenient.
Hypothesis: I may want to use throw-on-contract-violation for the precondition checks in my own code, but I never want throw-on-contract-violation for 3rd-party code. If we find that the hypothesis holds, perhaps it makes more sense to have something like this:

```cpp
#include <gsl/gsl_assert>

namespace my_lib {
#define GSL_CONTRACT_SCOPE MY_LIB
inline void my_func(int i)
{
    Expects(i >= 0); // can be configured to throw by defining `MY_LIB_THROW_ON_CONTRACT_VIOLATION`
    ...
}
#undef GSL_CONTRACT_SCOPE
} // namespace my_lib
```

(Cf. gsl-lite/gsl-lite#213, which proposes scoped contract checks for gsl-lite.) |
libstdc++'s std::span bounds check is not unconditionally enabled. It's actually disabled by default and only enabled if _GLIBCXX_ASSERTIONS is defined. It unconditionally fails fast on violation, though. https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libstdc%2B%2B-v3/include/std/span;h=f658adb04cf04259035cc95fe17ffbcdea0fbf45;hb=HEAD#l284

libc++'s std::span bounds check is not unconditionally enabled either. It's disabled by default and only enabled if _LIBCPP_DEBUG is defined. Until 8.0.0 it had _LIBCPP_DEBUG_USE_EXCEPTIONS to let it throw exceptions on contract violation, but that doesn't exist any more. A non-returning custom handler can still be set via the std::__libcpp_debug_function pointer (https://github.com/llvm/llvm-project/blob/master/libcxx/include/span#L304).

The Microsoft STL std::span enforcement seems to be configurable via _CONTAINER_DEBUG_LEVEL (https://github.com/microsoft/STL/blob/master/stl/inc/span#L604).

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0122r7.pdf says: "All accesses to the data encapsulated by a span are conceptually range-checked to ensure they remain within the bounds of the span. What actually happens as the result of a failure to meet span's bounds-safety constraints at runtime is undefined behavior. However, it should be considered effectively fatal to a program's ability to continue reliable execution. This is a critical aspect of span's design, and allows users to rely on the guarantee that as long as a sequence is accessed via a correctly initialized span, then its bounds cannot be overrun. As an example, in the current reference implementation, violating a range-check results by default in a call to terminate() but can also be configured via build-time mechanisms to continue execution (albeit with undefined behavior from that point on)."

So I agree giving gsl::span special treatment is strange. I don't have a strong argument for throwing on a precondition failure.
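The opt-in pattern shared by these implementations can be sketched as follows. This is a hypothetical macro and span-like type for illustration only (not any library's actual code): the bounds check compiles to nothing unless an assertions macro is defined at build time, and fails fast when it fires.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstddef>

// Hypothetical sketch of the libstdc++/libc++ pattern: checks are compiled
// out by default and enabled only by an opt-in build-time macro.
#ifdef MYLIB_ASSERTIONS
#define MYLIB_BOUNDS_CHECK(cond) \
    do { if (!(cond)) { std::fputs("bounds violation\n", stderr); std::abort(); } } while (0)
#else
#define MYLIB_BOUNDS_CHECK(cond) ((void)0)
#endif

template <typename T>
struct checked_span {
    T* ptr;
    std::size_t len;
    T& operator[](std::size_t i) const {
        MYLIB_BOUNDS_CHECK(i < len); // fail-fast only when assertions are enabled
        return ptr[i];
    }
};
```

In a default build (no `MYLIB_ASSERTIONS`) the subscript is exactly a raw pointer access; defining the macro turns every access into a checked one.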
Arguably Catch2 should simply support death tests, and AFAIK the author wants to add that support (from 2015: catchorg/Catch2#553 (comment)). But I don't think the specification should mandate what GSL implementations do regarding enforcement. |
We're developing a tool that, among other things, does APM (application performance monitoring), which means it's running inside arbitrary customer processes. One thing we cannot do there is […]. I think that defensive programming should be avoided as much as possible, but I'm afraid in our case it's not possible. Having some missing or corrupted traces is preferable by far to bringing down the application. Of course, letting execution continue beyond the failed assertion is not a good option either, because it would mean risking corrupting arbitrary memory. So throwing was the best compromise for us - for release builds. There's no problem with terminating in debug builds. BTW this is not only a problem with […]. PS: I think it's implied by what I wrote, but just to be clear: having a configurable […] |
@pgroke-dt: How do you actually verify that your tool doesn't bring down the process in some other way (e.g. by dereferencing a nullptr or letting an uncaught exception bubble up to a noexcept function)? I assume by diligent testing, code reviews and static analysis tools. Why can't those methods be used to ensure that there are no contract violations that could trigger the terminate? I mean, if you can't use the GSL then you can't use it - it's not like there aren't other implementations out there, or that it would be particularly hard to implement the tools yourself. But an explicit call to std::terminate in a contract check is really not the only way a bug in your application could terminate a process or cause much worse damage. |
Personally, I don't like that the question of how Ensures and Expects work always gets mingled with the behavior of gsl::span. If it is a desired and specified property of […] |
Yes.
You usually have far more places where something could throw than "noexcept" boundaries. At least that's the case in our code. So making sure no exception propagates to a "noexcept" boundary is a lot easier than making sure every single line of code is 100% safe regarding contract violations. Especially error handling/corner cases are hard to cover in unit tests. And while static analysis tools are good, they all produce false negatives, and some of them unfortunately also produce a lot of false positives. Both can lead to mistakes. When it comes to (not) dereferencing null pointers, we use our own functions/macros for checking those. Not everywhere, but where it makes sense. And as I already wrote, those throw exceptions. Re. corrupting the production data of the host application: if we were to simply let execution continue after a failed assert, that would potentially be a big problem. We don't do that; we throw instead. I also want to say I see less of a problem with […] |
I use not_null for function signatures to indicate the contract of the function (this parameter must not be null, the return value will not be null). However, somewhere along the line I'll have a technically-potentially-null pointer that I'm pretty sure isn't null that I have to convert to not_null (implicitly or explicitly). I do so with the compiler flag set that makes it throw. This is because my error reporting mechanism, which is intended to give useful diagnostics when the application terminates unexpectedly, relies on catching exceptions at the top level. If I dereference null I get a useful diagnostic. If I convert null to not_null I get a slightly more readable diagnostic. You could debate whether I should be relying on an exception from a null dereference, since it's technically undefined behavior so it may not happen consistently, but practically it works for my compiler right now.

Assuming that I am unable to overhaul my error reporting mechanism (though I think this is probably the ideal solution, pragmatism aside) and assuming that I want to continue to get useful diagnostics (I do) when I dereference null, then if the GSL doesn't support not_null throwing, it doesn't support my codebase. As such I'm in an awkward situation where I'm either stuck on this version of the GSL forever or (as @MikeGitb suggests) I have to write my own version of gsl::not_null. In the worst case this might mean writing my own version of the GSL if I want to avoid naming conflicts (I could call mine something different, but I may want to use tooling that relies on the name). This second option would violate http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#p13-use-support-libraries-as-appropriate in my opinion.
Technically it doesn't, because technically the library doesn't do what I need, but I think the Microsoft GSL implementation (and any other distributed GSL implementation) should be designed to be the one true GSL, because having 10 libraries that all do essentially the same thing is bad, as is having 1000 private implementations. Naturally you have to draw the line somewhere, but it feels like removing the ability to throw from not_null is too draconian. Putting aside my own anecdote, two pieces of evidence I raise are:
|
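The throwing wrapper being asked for can be sketched minimally. This is illustrative only, not GSL's actual `not_null`: a hypothetical `throwing_not_null` whose constructor throws on null, so a top-level catch produces a readable diagnostic instead of relying on whatever a null dereference happens to do.

```cpp
#include <cassert>
#include <stdexcept>

// Sketch, not GSL's implementation: a not_null-like wrapper that reports a
// null argument via an exception a top-level handler can catch and log.
template <typename T>
class throwing_not_null {
    T* ptr_;
public:
    explicit throwing_not_null(T* p) : ptr_(p) {
        if (p == nullptr)
            throw std::invalid_argument("throwing_not_null: null pointer");
    }
    T* get() const { return ptr_; }
    T& operator*() const { return *ptr_; }
};

// Usage sketch: constructing from null takes the defined diagnostic path
// (an exception) rather than undefined behavior or termination.
bool construct_reports_null() {
    try {
        throwing_not_null<int> p(nullptr);
        (void)p;
        return false;
    } catch (const std::invalid_argument&) {
        return true; // the top-level handler would log and shut down here
    }
}
```

Whether this is good design is exactly what the thread disputes; the sketch only shows the mechanism the commenter depends on.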
I wholeheartedly agree that the implicit conversion is a problem. I think the original idea was to have a static analysis tool that would stop you whenever it could not prove that gsl::not_null was constructed from a non-null pointer, in which case you'd be forced to put a manual check in front of it.

However, in practice that's probably rarely how it is being used. |
@MikeGitb I don't know what the original plan was. I only saw a discussion about making the ctor explicit - and the decision was to keep the implicit […]. With our product it's similar: we're already using the GSL and would like to update, mostly because of the now-unsigned size type in […]. We currently have the GSL configured to throw on contract violation, which is kind-of OK with the implicit […] |
I am currently stuck on GSL 2.x because 3.0 violates the zero-overhead principle and does not cater for high-performance applications. A contract violation should not happen in bug-free code. Yet the user is expected to pay for bugs they don't write with a branch on every contract check. I've measured the difference, and GSL 3.0 introduces noticeable code bloat, as these asserts can no longer be compiled out with […].

@shaneasd I would consider a fork of the GSL which reintroduces […] |
@johnmcfarlane I think we may be arguing slightly different things. I can definitely see a use case for a contract check which triggers in debug but not in release (or whatever other #define dictates which behavior you want). I think bug-free code is probably extremely rare, so I don't think it should be the only use case catered to, but I can see that in high-performance code it's an assumption you have to make to maximize performance. When you say "the user", are you referring to the end user of the software or the user of a library? I guess in either case their ideal would be that the software works without any bugs and thus requires no contract checks, but the code I work with sadly does not achieve this (using the C++ Core Guidelines and the GSL is one way to try and stop it getting worse). As such, for my use case error handling is an important feature. With this in mind, it seems like there are at least three kinds of users:
Naturally a fork is an option, but I don't need to support high-performance applications in my fork and @johnmcfarlane doesn't need to support exceptions in his. So either we each go our separate ways, which is a bad option, or we create an alternative and "better" GSL, which also seems like a bad option to me. |
The library user.
(By error handling, I assume you mean bug catching here.) I would caution against drawing a sharp distinction between 1) disabling checks in bug-free code and 2) enabling checks in buggy code. All code has bugs, none are welcome and trapping helps everybody eliminate them. But enabling contract checks doesn't somehow trap all the bugs. And the run-time cost is noticeable. Here's an example of the efficiency hit you see moving from 2.1 to 3.0. The example's contrived, but it's representative of actual penalties you can expect to see in code using GSL. I saw my modest program grow by a KB or two when I switched up. To be clear: I don't want to stop anyone from choosing to pay this cost.
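The kind of per-access cost being described can be made concrete with a contrived sketch. These are hypothetical functions, not GSL code, assuming the GSL 3.0 behavior of a fail-fast check on every access: the checked loop carries a branch per element that the unchecked loop does not, and the optimizer cannot remove it for arbitrary indices.

```cpp
#include <cstddef>
#include <cstdlib>

// Checked gather: every access pays a bounds test (fail-fast on violation),
// analogous to a mandatory per-access contract check.
int sum_checked(const int* p, std::size_t n, const std::size_t* idx, std::size_t m) {
    int s = 0;
    for (std::size_t k = 0; k < m; ++k) {
        if (idx[k] >= n) std::abort(); // the branch the caller pays for
        s += p[idx[k]];
    }
    return s;
}

// Unchecked gather: the caller guarantees idx[k] < n, so no branch is
// emitted per access (the zero-overhead alternative).
int sum_unchecked(const int* p, const std::size_t* idx, std::size_t m) {
    int s = 0;
    for (std::size_t k = 0; k < m; ++k)
        s += p[idx[k]];
    return s;
}
```

With random indices the compiler cannot hoist or elide the check, which is where the measured code bloat and run-time cost come from.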
Given GSL 2.1 supported both our use cases, I don't see why separate forks would be necessary. And it's not something I want to do either. But you did suggest dropping the GSL, and I couldn't resist replying, as I've been thinking the same thing recently! I would suggest tracking the GSL but with […] |
Sounds like we're in agreement about the problem and possible solutions. Ideally I'd like the editors to state a definitive position before considering something so drastic as forking though. Currently I'm satisfied with the version I'm using (2.0.0 by the look of it) but eventually I'd want an upgrade path. |
I don't have anything to bring to the discussion regarding […]

```cpp
span[idx]           // safe
span.unsafe_at(idx) // unsafe
```

On the other hand, one can just do the following for performance optimizations:

```cpp
*(span.data() + idx)
```

Regarding throwing, is there a reason why […] |
I agree with @elshize about clearly-marked "unsafe" members for […] |
@elshize @daviddetweiler A guiding design principle behind C++ is that no programmer should pay (in terms of run-time cost) for a feature that they do not use. When used correctly, there is nothing unsafe about unchecked […].

Additionally, throwing an exception in response to a bug is just plain wrong. It's unfortunate that the GSL library runs with this poor design decision, but let's not make the situation any worse or try to normalise it. If you think there's a bug in your code which might result in out-of-bounds access, fix your code. If you're not sure, use static analysers, dynamic analysers, fuzzers and -- most importantly -- write tests. If you think that an index might be out of range in legitimate, but exceptional, circumstances, use […].

But please don't make others pay for the bugs they didn't write. |
@johnmcfarlane As it turns out, my comment on MSVC was a bit out-of-date, and it's now smarter about it. |
@daviddetweiler firstly we should agree that we're focussing on bugs here: run-time conditions which absolutely cannot occur in correct code and for which the appropriate course of action is to fix the code. Given that, a run-time check is a very useful developer tool for testing the correctness of the code. However, it's simply not possible for the optimiser to consistently remove such checks. Therefore they cannot be mandatory in software which observes the zero-overhead principle.
I'm referring to circumstances in which throwing of an exception would be appropriate. That excludes circumstances where a bug has been detected at run-time. |
The designers/authors/maintainers have decided that preventing out-of-bounds access should be a fundamental property of the […].

And I think that is a perfectly valid design choice for a library that wants to make programming in C++ safe(r). If the design tradeoffs don't fit a particular use case, then it might be best to simply use a different type (e.g. […]). I said it before: my only real gripe is that the contract mechanism is used to implement (not just verify) part of the API specification. If gsl::span guarantees termination on out-of-bounds access, then it doesn't "Expect" or "Require" a valid index, but instead has a wide contract with defined behavior for any index, which is the very definition of not having a precondition. That aside, add my +1 for […] |
I agree with @MikeGitb, it's clear to me that the goal is to have safe access by default.
@johnmcfarlane From your comments, I figure it will be hard to convince you, but my view is that unsafe operations should be more verbose; one should be forced to do a little extra work, and therefore be very mindful, before doing something potentially unsafe. Mind you, this is all under the assumption that […].

@johnmcfarlane has a point that most of the random access operations by their nature won't be able to be optimized out, and that's the point. However, there are some obvious cases when you could do no checks or fewer checks, mostly for iterating. The most basic case is just iterating over the entire span:

```cpp
for (auto elem : span) { ... }
```

This should be no more expensive than a regular array, since it will desugar to iterators. If you need to iterate over only a subspan, then it should be done with:

```cpp
for (auto elem : span.subspan(a, b)) { ... }
```

You pay for only one check, which makes sense because you can't be sure […]. For more complicated access, when you know that you're in the bounds but don't do a sequential access, you should probably use the […]:

```cpp
for (auto idx : permutation) {
    // safe because permutation elements are within the bounds
    debug_assert(idx < span.size()); // maybe even add assert?
    auto elem = span.unsafe_at(idx);
}
```
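The "no more expensive than a regular array" claim rests on how range-for desugars. A minimal sketch with a hypothetical span-like type (not gsl::span itself): iteration is bounded by `begin()`/`end()` fixed up front, so no per-element index check is ever needed.

```cpp
#include <cstddef>

// Illustrative span-like type: range-for over it desugars to an iterator
// loop, so the bounds are established once, not checked per element.
template <typename T>
struct iter_span {
    T* ptr;
    std::size_t len;
    T* begin() const { return ptr; }
    T* end() const { return ptr + len; }
};

int sum(iter_span<const int> s) {
    int total = 0;
    for (auto elem : s) // equivalent to: for (auto it = s.begin(); it != s.end(); ++it)
        total += elem;
    return total;
}
```

Because the loop condition is the iterator comparison itself, a checked `operator[]` never enters the picture for whole-range iteration.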
@MikeGitb I think this is the main issue of interpreting what contracts are, and people don't seem to agree. I'm of the opinion that a program should terminate on an out-of-bounds error. For cases when we can't allow the program to crash, and we somehow have a way to recover, we could use a throwing version, I think. |
I don't have a problem with program termination if you try to access out of bounds. What I'm saying is that if that is the guaranteed behavior, this should be implemented via […]. I don't actually think there is a big question about what a contract is; e.g. the terms wide and narrow contracts are well established. The big discussion is around what should happen if a contract gets violated and whether contracts should always be checked in the first place, which is precisely why I would not couple a behavior that is part of the API specification to a contract-checking mechanism that has been subject to change, is user-configurable, and is likely to change in the future. |
This is actually what I was referring to; I wasn't very clear. And I do agree with your take on the out-of-bounds check for span: we should just do a check instead of executing […] |
It is the express design and intent of […] |
That would be fine, except that it means that […]. Also, it means that […] |
Is it reasonable to assume that gsl::span and std::span do the same thing? My first instinct was yes. However, it occurs to me that there's a lot of precedent for the STL taking inspiration from utility libraries like the GSL, and there is also precedent (I'm thinking of boost::optional) for not being faithful to the original implementation (preferring something better, by some metric, over something consistent). As such, I think perhaps it is unreasonable to assume two libraries will apply the same meaning to the same name. In particular, the solution in this case seems like it would be to rename one rather than change the meaning of one, and renaming would be a pretty big breaking change. |
If the option to throw disappears, I will have to stop recommending using that version, for the reason mentioned below.
…On 9/5/2020 12:13 PM, jheaff1 wrote:
I too am concerned about GSL 3.0.0 dropping the capability of throwing
on contract violation. For safety-critical systems, terminating is
simply not an option. It doesn’t matter how well a developer thinks
they have tested their code; there is never 100% certainty that no bugs
exist in production software.
Therefore, being able to throw on a contract violation provides a
safety net, allowing the application to stop critical processes and
die gracefully or retry etc. I do not trust myself nor my fellow
developers to write completely bug-free code so that we are 100% sure
that contract violations can not occur.
|
I had deleted my comment (referenced by @BjarneStroustrup) because after I posted it I realised that one could use a custom std::terminate handler to gracefully stop processes in an application before then calling std::exit or std::abort. However, after further reflection I think that a developer should indeed be able to choose how contract violations are handled (terminate vs throw). Usage of a custom std::terminate handler bypasses the invocation of destructors that have been written to stop critical processes gracefully.

Consider a simple calculator app. An Add method may have preconditions such that the addition of the two inputs should not exceed the numerical limit of a 64-bit signed integer. If the user were to add two numbers which violate the contract, should the calculator app terminate? I wouldn't say so. I would argue that it should tell the user that they entered numbers that are too large. Enforcing termination on contract violation seems too drastic to me; perhaps I misunderstand the intended usage of contracts. |
The calculator example seems like a very bad use of preconditions. The contract should allow larger numbers, check for overflow, and report an error. You say yourself that you want the user to be informed, so then it's not a precondition. |
Really? I have often seen in online tutorials/papers a square root method used as an example to demonstrate contracts, whereby a precondition is that the input is more than 0. Again, consider the calculator example. If the user input -1 into a square root function, the app shouldn't just terminate. As far as the sqrt function is concerned, it should just state its preconditions. It is the choice of the higher-level application developer to decide whether contract violations should cause termination or not, which highlights my concerns around the GSL only supporting termination on contract violation. |
If the function to calculate the result has such a precondition, the app needs to validate the inputs before calling that function. If you want to handle bad input and tell the user there's a problem, you need to write code to do that. Informing the user is a defined feature of your program. Precondition violations are undefined behaviour. Relying on a specific response to undefined behaviour to implement a guaranteed feature of your program is foolish. You would be building defined behaviour on top of undefined behaviour.
No, it is the responsibility of the higher level application developer to meet the contracts of the functions they are using. If that requires defensive checks to ensure you don't pass zero to a sqrt function, you need to do those checks, not just call the function anyway and expect it to tell you there was a precondition violation. |
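The separation being described can be sketched like this. The names are illustrative, not from the GSL: input validation is a defined, reportable feature at the application boundary, while the precondition inside the library function remains a bug check only.

```cpp
#include <cassert>
#include <cmath>
#include <optional>

// Library function: narrow contract. The assert documents and (in debug
// builds) checks the precondition; violating it is a bug in the caller.
double my_sqrt(double x) {
    assert(x >= 0.0 && "precondition: caller must not pass a negative value");
    return std::sqrt(x);
}

// Application boundary: validating user input is a defined feature of the
// program, done BEFORE calling the narrow-contract function.
std::optional<double> sqrt_from_user(double user_input) {
    if (user_input < 0.0)
        return std::nullopt; // defined behavior: report bad input to the user
    return my_sqrt(user_input);
}
```

With this structure, the precondition can never fire on bad user input; it can only fire if the validation logic itself has a bug, which is exactly what a contract check is for.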
The main problem I have with turning contract violations into exceptions via a config macro is that introducing new exceptions into a 3rd-party library is usually not something you want to do after the fact. Suddenly all kinds of functions can no longer be noexcept, which in turn has effects on performance, control flow, test coverage, the results of meta functions (e.g. is_nothrow_copy_constructible) and, in C++17, even the type of the function. What makes this worse is that this is almost always a global decision, unless individual libs/modules take great care not to put any of the contract checks into a header file (directly or indirectly) - and that isn't even considering the ODR violations you might or might not get away with in practice.

If 100% of the C++ code that uses the GSL is under my control, then I see the value in being able to tailor how contract violations are reported to the needs of my application. However, in that case, I'd probably just fork the GSL library and rewrite the macros to suit my purpose anyway.

Finally, as jwakely wrote: a precondition violation is a bug in your program. A contract check is not a substitute for validating end-user input (would you use assert in a product shipping to the end user to validate user input?) but a defense against a programming error. You might still not want to terminate your program every time you detect a bug at runtime, but in a bug-free program there should be no way whatsoever that a precondition gets violated, no matter the user input (the "user" in this case may also be another 3rd-party program sending malformed data via IPC, network or the filesystem). |
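The noexcept effect mentioned above can be made concrete with a small sketch. `MY_EXPECTS` and `MYLIB_THROW_ON_VIOLATION` are hypothetical stand-ins for the GSL's macros: whether the check can throw leaks into an observable type property.

```cpp
#include <exception>
#include <stdexcept>
#include <type_traits>

// Hypothetical contract-check macro whose behavior a config macro selects.
#ifdef MYLIB_THROW_ON_VIOLATION
#define MY_EXPECTS(c) do { if (!(c)) throw std::logic_error("contract violation"); } while (0)
constexpr bool kChecksNothrow = false;
#else
#define MY_EXPECTS(c) do { if (!(c)) std::terminate(); } while (0)
constexpr bool kChecksNothrow = true;
#endif

struct Widget {
    int v = 0;
    Widget() = default;
    // The copy constructor can only be noexcept when the check cannot throw.
    Widget(const Widget& o) noexcept(kChecksNothrow) : v(o.v) {
        MY_EXPECTS(o.v >= 0);
    }
};

// The build-time configuration changes the result of a type trait, which
// downstream code (e.g. std::vector's move-vs-copy choice) observes.
static_assert(std::is_nothrow_copy_constructible<Widget>::value == kChecksNothrow,
              "contract-check configuration is visible through type traits");
```

This is why flipping such a macro in one translation unit but not another is an ODR hazard: the same type ends up with different properties in different parts of the program.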
P.S. I am not stating my position on terminate vs throw (even though I do have an opinion), I am simply saying that the calculator examples are very bad examples for this topic, maybe based on a misunderstanding of what preconditions are. "I need to handle bad input, so I want to be able to violate preconditions" is extremely confused thinking. |
I see, thank you for the clarification. I must have misunderstood the intention of contracts. If an application were so concerned about stopping critical processes upon std::terminate being invoked, it could do so in a custom std::terminate handler. Therefore, as per your point about violations being bugs rather than allowable errors, it makes sense for violations to call std::terminate rather than throw. There is also @johnmcfarlane's point regarding performance, which is also a concern for me: the GSL library can no longer be configured to compile out assertions in production builds. |
I think I already wrote it somewhere: I think the problem here is that a guaranteed check (gsl::span not allowing out-of-bounds access) is implemented via contract-checking macros. If the specification for […]. Now, I can agree with the design decision that […] |
Yes. Preconditions are not a substitute for input validation. Rather, a
precondition can be a check that you did the input validation correctly.
…On 9/6/2020 12:58 PM, Jonathan Wakely wrote:
The calculator example seems like a very bad use of preconditions. The
contract should allow larger numbers, check for overflow, and report
an error. You say yourself that you want the user to be informed, so
then it's not a precondition.
|
I fully agree with @MikeGitb and @BjarneStroustrup: if the intent is to have safe access with a runtime bounds check, then […].

Of course, one common thing in both cases (span and contracts) is that if we decide to unconditionally always terminate in either case, there's the problem of certain systems that really need to stay up no matter what.

As for […]. And as for the contracts, does anyone know if there's anything close to consensus as to what the standard contracts should be? As far as I understand, the reason they were pulled out of C++20 was that there was no agreement on whether enforcement should be done always or only as a debug assert, is that right? Or am I remembering incorrectly? I definitely heard both opinions, but I'm not sure if that's in fact the topic of main contention. |
There is a very wide spectrum of opinions on what contracts are for and
how the notion is best supported. You can even find contradictory
opinions from a single author as opinions change over decades.
That said, I'd like to retract my somewhat bombastic statement about
preconditions and exceptions. The agreement among the CG editors is to
wait to see what the contracts study group comes up with before doing a
thorough revision of our treatment of contracts. Also, I should not post
in a hurry while on vacation. Sorry.
Anyway, preconditions are not a good way to do input validation.
…On 9/6/2020 1:22 PM, jheaff1 wrote:
Really? I have often seen in online tutorials/papers a square root
method used as an example to demonstrate contracts, whereby a
precondition is that the input is more than 0.
Again, consider the calculator example. If the user input -1 into a
square root function, the app shouldn’t just terminate.
As far as the sqrt function is concerned, it should just state its
preconditions. It is the choice the higher level application developer
to decide whether contract violations should cause termination or not,
which highlights my concerns around GSL only supporting termination on
contract violation.
|
I'm a little confused. It seems like there's an inconsistency between the view expressed in http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Ri-expects and the view being presented here (particularly by @BjarneStroustrup). That rule, as I read it, says that anything that is a precondition should be expressed as such by using Expects(), but it sounds like there is not a consensus among the editors as to what Expects should actually do. There also seems to be a lack of clarity as to what exactly constitutes a precondition in some of the more obscure edge cases. However, it seems like there is useful advice to offer here, as everyone appears to agree that some things (like input validation) should definitely not be treated as preconditions. Should the rule be less prescriptive and simply say something like "Clearly differentiate between preconditions of a method and the logic of the method"?
Possibly this could be rolled into http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#i5-state-preconditions-if-any Naturally the same applies to http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#i8-prefer-ensures-for-expressing-postconditions Of course this doesn't address the question of what gsl::span should do on contract violation. |
I would agree that https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Ri-expects is too prescriptive, mainly because I don't think GSL's Expects is especially good. The GSL Expects suffers from:
|
@reddwarf69 I agree, but the right way to do contracts in C++ is still an open question. Once there is more clarity I would expect GSL to be updated. That said, I agree that a macro without a "namespace" prefix that isn't all UPPERCASE is ghastly. |
I come to this topic acting as the implementer of a library that others will use. It's not my decision to make whether termination is appropriate or not. In one case I know of, it's not only not appropriate, it's against expressed policy for a library to terminate. My need here is to allow the user of the library to make the decision about appropriate run-time behavior. Much of the discussion here assumes that the author of the code also has the authority to decide what the policy is. That's simply not always the case for library authors. To be explicit, the fixed behavior of GSL 3.0.0 in this regard makes it unsuitable for use as-is in our library: it does not allow configuration of termination behavior.

There's an original muddiness in thinking about implementations of program correctness that seems to be at play here. It was expressed well here:
Preconditions were originally conceived of as an analytic tool for using formal logic to reason about the behavior of programs. The formal logic was not just predicate logic, but included first-order-logic quantifiers (for-all, there-exists) and sometimes other non-first-order quantifiers (number-of). Somewhere along the way, many got in the habit of forgetting that quantifiers could appear in precondition expressions. I can only surmise the causes, but I have a sense that it's because predicates can readily be evaluated in code and quantified statements cannot. Preconditions that can be evaluated are only a subset of preconditions that can be expressed. It's a rather profound conceptual mistake to conflate the two, and that mistake is at the heart of CCG I.6. You simply can't express the full variety of preconditions used in analysis in an expression that evaluates to […].

As currently stated, I consider CCG I.6 to be Bad Advice. If I "prefer Expects() for expressing preconditions" then there are preconditions that I won't be expressing at all, because they don't fit into the language. It's possible to salvage I.6, though, by qualifying its utility: "Prefer Expects() to specify which preconditions might be validated at run-time". With this change, it would be clear that (1) whether predicates should be evaluated at all, and (2) what to do about them, should both have configurable behavior according to an author's situation. |
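The evaluable/expressible distinction drawn above can be illustrated with a sketch (the function and names are hypothetical): a bounded for-all such as "the input range is sorted" can be turned into a runtime predicate, while a lifetime property like "the pointer refers to a live object for the whole call" cannot be tested by any in-language expression and can only live in documentation.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Evaluable precondition: "the range is sorted" is a bounded for-all
// quantifier, checkable in code (at O(n) cost).
int count_distinct_sorted(const std::vector<int>& v) {
    assert(std::is_sorted(v.begin(), v.end())); // runtime-checkable precondition
    int n = v.empty() ? 0 : 1;
    for (std::size_t i = 1; i < v.size(); ++i)
        if (v[i] != v[i - 1]) ++n;
    return n;
}

// Non-evaluable precondition (documentation only): "p points to an object
// that remains alive for the duration of the call". No expression over p
// can test this temporal/lifetime property, so Expects() cannot state it.
int read_through(const int* p) {
    return *p;
}
```

Only the first kind fits an `Expects()`-style mechanism; the second kind is exactly the category the comment argues I.6 silently excludes.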
Just wanted to chime in. We've used […]. For example, the presence of the […]. Is there anything I can do to circumvent the […] |
Hi @dingari. Unless the situation has changed significantly in the past 2 years, then yes, you either need to: […]

If you are able (if your toolchain supports it and the effort involved is not too great), I'd personally recommend […], though you may wish to read up on those flags to understand their full implications. They may trap many other bugs during testing, but they won't just add bloat to […] |
I've read the informative replies. Thank you @shaneasd for linking to the issue I raised.

As others have said, termination on a contract violation does feel unsafe. Adding a custom handler so that std::terminate() shuts down gracefully wouldn't, I feel, work as well as throwing an exception like invalid_argument, which gives the calling code a chance to handle it locally to where it occurred. In my experience, when a process shuts down on a safety-critical system, we have a watchdog process that restarts it - of course it may just crash again, because we don't know which module or driver it occurred in the first time. (Code using pointers would be a common cause of such issues.)

I always feel code must check that the parameters it passes are valid. Input validation is mandatory. However, the code that receives bad parameters must not crash if something sneaks through; it should gracefully throw something that can be handled. E.g. on a USB module failure on an embedded system (i.e. bad data, null pointer), we display "USB error" and disable the USB module until a technician arrives to check the logs. At least the heart rate monitor doesn't crash!

A 3rd-party library with such std::terminate() calls can't be used on a safety-critical system. It's the same with assert() calling abort() on an !NDEBUG build, or enabling _GLIBCXX_ASSERTIONS, I believe. A throw would be useful. Of course an assert core dump gives a more accurate backtrace in a debug build than just leaving it to SEGV. I was only looking at gsl::not_null, which has a runtime call to std::terminate() instead of throwing logic_error or such.

Edit: It feels safer to have exceptions enabled as the default behavior when constraints are not met in the GSL, e.g. gsl::not_null. I recall it was ALGOL that introduced the ability to disable runtime NULL checks to boost performance, so it seems logical to still allow disabling of exceptions, asserts, and std::terminate calls if that is what someone wants to do to save a few cycles. |
There have been discussions recently in #1512 and microsoft/GSL#831 regarding contract violation behavior. Would the editors mind clarifying the desired behavior for contract violation and updating GSL.assert as necessary?
The confusion stems from a potential contradiction between an Editors call in issue #1512 and a note in the GSL.assert section for Expects.

@hsutter's post from #1512: […]

Note from the Expects section of GSL.assert: […]

Microsoft's GSL implementation removed the alternative violation behaviors (throwing & unenforced) with PR microsoft/GSL#831. This change also had the side benefit of preventing potential ODR violations from misconfiguring contract violation behaviors.