Hacker News | terrymah's comments

No, calling throw in a noexcept function is defined behavior (std::terminate is called), and no diagnostic is required for it

I think maybe WG21 was concerned a compiler engineer would get clever if throwing in noexcept were UB, for example by assuming any block that throws is unreachable and could just be removed along with all blocks it postdominates. Compiler guys love optimizations that just remove code. The fastest and smallest code is code that can't run and doesn't exist


Compilers are allowed to and do diagnose defined but undesirable behavior though, even more so if that's only enabled with an option.


Until yesterday I thought I was the only person in the world who thought the designed behavior was undesirable, though


Dude I am going to blow your mind


Eeeermm... I haven't heard an explosion, but 4+ hours has passed. Are you alive there, dude?

Jokes aside - I'm always happy to be corrected, so please go on if I made a mistake somewhere.


I think their point was that the halting problem isn't merely NP-hard, it is in fact undecidable. Since there is no algorithm that can solve it, there is no point in talking about the complexity of that algorithm.

Alternatively, Java shows that it is very much possible to do this - the compiler can enforce that a function can't throw exceptions (limited to checked exceptions in Java, but that is beside the point). The way to do it is easy: just check that it doesn't call throw directly, and that all of the functions it calls are also marked noexcept. No need to explore things any deeper than that.

Of course, the designers of C++ didn't want to impose this restriction for various reasons.


Determining if a function throws is a pretty basic bit of information collected in bottom-up codegen (or during a pre-pass of a whole-program optimization) and in no sense NP-hard. Compilers have been doing it for decades and it's useful

Noexcept on the surface is useful, except for the terminate guarantee, which requires a ton of work to avoid metadata size growth and hurts inlining. If violations of noexcept were UB and it were a pure optimization hint, the world would be much better


Interestingly, the AUTOSAR C++14 Guidelines (https://www.autosar.org/fileadmin/standards/R22-11/AP/AUTOSA...) had "Rule A15-4-4 (required, implementation, automated): A declaration of non-throwing function shall contain noexcept specification", which was thankfully removed in MISRA C++:2023 (the latest guidance, for C++17; can't give a link, it's a paid document). That revision mandates noexcept only for a few special functions (destructors, move constructors/assignments, and a few more).


Yes, true, thanks. I confused "will it throw given inputs&state?" with "can it potentially throw?".

I wonder why compilers don't expose that information. Some operator returning a tri-state "this code provably doesn't throw | could throw | can't see, break the compilation" could help writing generic code immensely. Instead we have to resort to a multi-storey noexcept() operator inside a noexcept qualifier, which is very detrimental to code readability...


You very likely can't always answer the question "will this function throw?", but it should be relatively easy to identify the subset of functions that transitively call only functions guaranteed not to throw. That's only a subset of all non-throwing functions, of course.


Yes, my mistake was confusing the run-time question "will it throw?" with "can it throw in theory?". The latter, if I'm not mistaken again, only requires a throw statement somewhere in non-dead code, which is totally possible to find for code the compiler can see.


No, we compile in bottom up order, starting with leaf functions, and collecting information about functions as we go. So "not throwing" sort of trickles up when possible to a certain degree.

In LTCG (MSVC)/O3 (GCC/Clang) there are pre-passes over the entire call graph to collect this information


Yes of course. Sometimes the compiler can tell. But the original question feels to me more like “Shouldn't the compiler deduce restrict for you?”


It absolutely does, and even better, the compiler-deduced "this function doesn't throw" doesn't come with the overhead of implementing noexcept proper


You can't just look at the codegen of the function itself, you also have to consider the metadata, and the overhead of processing any metadata

Specifically here (as I said in other comments), where it goes from a complicated/quality-of-implementation issue to "shit, this is complicated" is when you consider inlining. If noexcept inhibits inlining in any conceivable circumstance then it's having a dramatic (slightly indirect) impact on performance


In MSVC we've also pretty heavily optimized the whole function case such that we no longer have a literal try/catch block around it (I think there is a single bit in our per function unwind info that the unwinder checks and kills the program if it encounters while unwinding). One extra branch but no increase in the unwind metadata size

The inlining case was always the hard problem to solve though


Oh man, don't get me started. This was a point in a talk I gave years ago called "Please Please Help the Compiler" (what I thought was a clever cut at the conventional wisdom at the time of "Don't Try to Help the Compiler")

I work on the MSVC backend. I argued pretty strenuously at the time that noexcept was costly and being marketed incorrectly. Perhaps the costs are worth it, but nonetheless there is a cost

The reason is simple: there is a guarantee here that noexcept functions don't throw. std::terminate has to be called. That has to be implemented. There is some cost to that - conceptually every noexcept function (or worse, every call to a noexcept function) is surrounded by a giant try/catch(...) block.

Yes there are optimizations here. But it's still not free

Less obvious; how does inlining work? What happens if you inline a noexcept function into a function that allows exceptions? Do we now have "regions" of noexceptness inside that function (answer: yes). How do you implement that? Again, this is implementable, but this is even harder than the whole function case, and a naive/early implementation might prohibit inlining across degrees of noexcept-ness to be correct/as-if. And guess what, this is what early versions of MSVC did, and this was our biggest problem: a problem which grew release after release as noexcept permeated the standard library.

Anyway. My point is, we need more backend compiler engineers on WG21 and not just front end, library, and language lawyer guys.

I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc). The reaction to my suggestion was not positive.


Oh, cool! I googled myself and someone actually archived the slides from the talk I gave. I think it holds up pretty well today

https://github.com/TriangleCppDevelopersGroup/TerryMahaffeyC...

*edit except the stuff about fastlink

*edit 2 also I have since added a heuristic bonus for the "inline" keyword because I could no longer stand the irony of "inline" not having anything to do with inlining

*edit 3 ok, also statements like "consider doing X if you have no security exposure" haven't held up well


Props for the edits ;)

I would be very interested in an updated blog post on this if you felt so inclined!


> Anyway. My point is, we need more backend compiler engineers on WG21 and not just front end, library, and language lawyer guys.

Even better: the current way of working is broken. WG21 should only discuss papers that come with a preview implementation, just like in other language ecosystems.

We have had too many features approved as "on-paper only" designs that were proven a bad idea only when they finally got implemented, some of them removed or changed in later ISO revisions. That alone proves the current process isn't working.


> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc).

Do you know if the reasoning for originally switching noexcept violations from UB to calling std::terminate was documented anywhere? The corresponding meeting minutes [0] describes the vote to change the behavior but not the reason(s). There's this bit, though:

> [Adamczyk] added that there was strong consensus that this approach did not add call overhead in quality exception handling implementations, and did not restrict optimization unnecessarily.

Did that view not pan out since that meeting?

[0]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n30...


I think WG21 has been violently against adding additional UB to the language, because of some Hacker News articles a decade ago where people were alarmed at null pointer checks being elided, or at things happening that didn't match their expectations around signed int overflow, or whatever. Generally it seems a view has spread that compiler implementers treat undefined behavior as a license to party, that we're generally having too much fun, and are not to be trusted.

In reality undefined behavior is useful in the sense that (like this case) it allows us to not have to write code to consider and handle certain situations - code which may make all situations slower, or allows certain optimizations to exist which work 99% of the time.

Regarding “not pan out”: I think the overhead of noexcept for the single function call case is fine, and inlining is and has always been the issue.


> I think WG21 has been violently against adding additional UB to the language, because of some hacker news articles a decade ago about people being alarmed at null pointer checks being elided or things happening that didn’t match their expectation in signed int overflow or whatever.

Huh, didn't expect the no-UB sentiment to have extended that far back!

> Regarding “not pan out”: I think the overhead of noexcept for the single function call case is fine, and inlining is and has always been the issue.

Do you know if the other major compilers also face similar issues?


Things are much better in 2024 in MSVC than they were in 2014. The overhead today is mostly the additional metadata associated with tracking the state, and most of the inlining incompatibilities were worked through (with a ton of work by the compiler devs). So it's a binary size issue. We've even been working on that (I remember doing work to combine adjacent identical regions, etc). Not sure what the status is in GCC/LLVM today.

I'm just a little sore about it because it was being sold as a "hey here is an optimization!" and it very much was not, at least from where I was sitting. I thought this was a very very good case of having it be UB (I think the entire class of user source annotations like this should be UB if the runtime behavior violates the user annotation)


Sounds promising!

Do you think optimizations could eventually bring the std::terminate version of noexcept near/up to par with a hypothetical UB noexcept, or do you think that at least some overhead will always be present?


Could the UB version of noexcept be provided as a compiler extension? Either a separate attribute or a compiler flag to switch the behavior would be fine.


We had a UB version of noexcept for a very, very long time. __declspec(nothrow), the throw() function specifier, etc.


> did not restrict optimization unnecessarily.

well clearly there is a cost


It's kinda funny that C++, even in recent editions, generally reaches for the UB gun to enable optimizations, but somehow noexcept ended up meaning "well actually, try/catch std::terminate". I bet most C++-damaged people would expect throwing in a noexcept function to simply be UB and potentially blow their heap off or something, instead of being neatly defined behavior with invisible overhead.


Probably the right thing for noexcept would be to enforce a "noexcept may only call noexcept methods", but that ship has sailed. I also understand that it would necessarily create the red/green method problem, but that's sort of unavoidable.


Unless you're C++-damaged enough to assume it's one of those bullshit gaslighting "it might actually not do anything lol" premature optimization keywords, like `constexpr`.


`inline` is my favorite example of this. It's a "this does things, just not what you think, and it's also not used for what you think it's for. Don't use it" keyword.


> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc). The reaction to my suggestion was not positive.

So instead of helping programmers actually write noexcept functions, you wanted to make this an even bigger footgun than it already is? How often are there try/catch blocks that are actually elideable in real-world code? How much performance would actually be gained by doing that, versus the cost of all of the security issues that this feature would introduce?

If the compiler actually checked that noexcept code can't throw exceptions (i.e. noexcept functions were only allowed to call other noexcept functions), and the only way to get exceptions in noexcept functions was calls to C code which then calls other C++ code that throws, then I would actually agree with you that this would have been OK as UB (since anyway there are no guarantees that even perfectly written C code that gets an exception wouldn't leave your system in a bad state). But with a feature that already relies on programmer care, and can break at every upgrade of a third party library, making this UB seems far too dangerous for far too little gain.


Added to my list why I compile with -fno-exceptions


-fno-exceptions only prevents you from calling throw. If you don't want the overhead, you likely want -fno-asynchronous-unwind-tables plus that clang flag that specifies that extern "C" functions don't throw


I'm pretty sure I saw a roughly 10% binary size decrease in my C++ projects just by setting -fno-exceptions, and that was for C++ code that didn't use exceptions in the first place, so there must be more to it than just forbidding throw. Last time I tinkered with this stuff was around 2017 though.


You do not need unwind tables for noexcept functions, that can be a significant space saving.


sure, but the unwind flags don't prevent optimizations the way exceptions do


Looking at the docs for https://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html#ind...

How does that affect code generation? It reads as only affecting debug information and binary size to my untrained eyes.


And based on a few clang discourse threads, it only removes .eh_frame

I think this only affects binary size. I understand smaller binaries can load faster, but not being able to get stack traces for debuggers and profilers seems like a loss


> there is a guarantee here that noexcept functions don't throw. std::terminate has to be called. That has to be implemented

Could you elaborate on how this causes more overhead than without noexcept? The fact that something has to be done when throwing an exception is true in both cases, right? Naively it'd seem like without noexcept, you raise the exception; and with noexcept, you call std::terminate instead. Presumably the compiler is already moving your exception throwing instructions off the happy hot path.

Very very basic test with Clang: https://godbolt.org/z/6aqWWz4Pe Looks like both variations have similar code structure, with 1 extra instruction for noexcept.


Pick a different architecture - anything 32bit. Exception handling on 64bit windows works differently, where the overhead is in the PE headers instead of asm directly (and is in general lower). You don't have the setup and teardown in your example

Throwing an exception has the same overhead in both cases. In the case of a noexcept function, the function has to (or used to have to, depending on the architecture) set up an exception handling frame and remove it when leaving.

>Naively it'd seem like without noexcept, you raise the exception; and with noexcept, you call std::terminate instead

Except you may call a normal function from a noexcept function, and that function may still raise an exception.


If you're on one of the platforms with sane exception handling, it's a matter of emitting different assembly code for the landing pad so that when unwinding it calls std::terminate instead of running destructors for the local scope. Zero additional overhead. If you're on old 32-bit Microsoft Windows using MSVC 6 or something, well, you might have problems. One of the lesser ones being increased overhead for noexcept.


> Zero additional overhead.

It's zero runtime overhead in the good case but still has an executable size overhead for functions that previously did not need to run any destructors.


Very true. Then again, if you don't need to tear down anything (ie. run destructors) during error handling you're either not doing any error handling or you're not doing any useful work.


I’m curious: where does the overhead of try/catch come from in a “zero-overhead” implementation?

Is it just that it forces the stack to be “sufficiently unwindable” in a way that might make it hard to apply optimisations that significantly alter the structure of the CFG? I could see inlining and TCO being tricky perhaps?

Or does Windows use a different implementation? Not sure if it uses the Itanium ABI or something else.


Everyone keeps scanning over the inlining issues, which I think are much larger

“Zero overhead” refers to the actual functions code gen; there are still tables and stuff that have to be updated

Our implementation of noexcept for the single-function case I think is fine now. There is a single extra bit in the per-function exception info which is checked by the unwinder. Other than requiring exception info in cases where we otherwise wouldn't need it, the cost is small

The inlining case has always been both more complicated and more of a problem. If your language feature inhibits inlining in any situation you have a real problem


Doesn't every function already need exception unwinding metadata? If the function is marked noexcept, then can't you write the logical equivalent of "Unwinding instructions: Don't." and the exception dispatcher can call std::terminate when it sees that?


I assume the /EHr- flag was introduced to mitigate this, right?


Nah that was mostly about extern "C" functions which technically can't throw (so the noexcept runtime stuff would be optimized out) but in practice there is a ton of code marked extern "C" which throws


> in practice there is a ton of code marked extern "C" which throws

Obviously, a random programmer could do any number of evil things, but does that apply to standard code, such as the C standard library used from C++?


Yes, qsort and bsearch can throw.


jfc... Can you give more info on how you learned this, and which lib implementations it applies to?


Well, given that qsort and bsearch take a function pointer and call it, that function pointer can easily point to a function that throws. So I think this applies to all implementations of qsort and bsearch. Especially since there is no way to mark a function pointer as noexcept.


> Especially since there is no way to mark a function pointer as noexcept.

There is, noexcept is part of the type since C++17. In fact, I prefer noexcept function pointer parameters for C library wrappers, as I don't expect most libraries written in C to deal with stack unwinding at all.


Oops, you're right, I hadn't checked that this had changed in C++ 17.


https://eel.is/c++draft/alg.c.library#4

Any library implementation that is C++ compliant must implement this. I'm pretty sure that libstdc++ + glibc is compliant, assuming sane glibc compiler options.


but to me these are - again - user-induced problems. What I'm interested in: if a user doesn't do stupid things, should they still be afraid that standard extern "C" code could throw? Say, std::sprintf(), which if I'm not mistaken boils down to C directly? Are there cases where the C standard library could throw without "help" from the user?


I don't think anything in the C parts of the C++ standard library throws its own exceptions. However, it's not completely unreasonable for a third party C library to use a C++ library underneath, and that might propagate any exceptions that the C++ side throws. This would be especially true if the C library were designed with some kind of plugin support, and someone supplied a C++ plugin.


https://eel.is/c++draft/res.on.exception.handling#2

In general, functions from the C standard library must not throw, except for the ones taking function pointers that they execute.


perfect again, thank you!


a perfect answer, thank you!


extern "C" seems related to the other flags, not 'r'?


Well, yeah, things can be related to many things, but throwing extern "C" functions were one of the motivations for 'r', as I recall. 'r' is about a compiler optimization where we elide the runtime terminate check if we can statically "prove" a function can never throw. To prove it statically we depend on things like extern "C" functions not throwing, even though users can (and do) totally write code that does.


I, too, came for the procedurally generated Doom levels and left disappointed


From direct effects, maybe. But no one is immune from the second order effects. Once our customers start going out of business because they are directly exposed, they can't buy our software anymore.


It was a joint project, with ms eventually pulling out to work on NT


NT was originally known as OS/2 3.0.


Pre-Warp then. Those WARP commercials made me really want it when I was a kid, even though they (on purpose) never communicated what was so amazing about it.


Ah yes, that's true. Totally forgot about that. Thanks!


Indeed, that OS/2 2.0 fiasco went so badly that it is one of the favorite topics.

