Defer available in gcc and clang

by r4um (gustedt.wordpress.com)

Source

The article is a bit dense, but what it's announcing is effectively golang's `defer` (with extra braces) or a limited form of C++'s RAII (with much less boilerplate).

Both RAII and `defer` have proven to be highly useful in real-world code. This seems like a good addition to the C language that I hope makes it into the standard.

Probably closer to defer in Zig than in Go, I would imagine. Defer in Go executes when the function deferred within returns; defer in Zig executes when the scope deferred within exits.

This is the crucial difference. Scope-based is much better.

By the way, GCC and Clang have __attribute__((cleanup)) (which is the same, scope-based clean-up) and have had it for over a decade, and it is widely used in open source projects now.
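For anyone who hasn't run into it, here is a minimal sketch of how the attribute is used (the helper name free_ptr is just for illustration):

    #include <stdlib.h>

    static void free_ptr(void *p) {
        free(*(void **)p);   /* called with a pointer to the annotated variable */
    }

    void use(void) {
        __attribute__((cleanup(free_ptr))) char *buf = malloc(64);
        /* ... */
    }                        /* free_ptr(&buf) runs here, whatever the exit path */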

I wonder what the thought process of the Go designers was when coming up with that approach. Function scope is rarely what a user needs, has major pitfalls, and is more complex to implement in the compiler (need to append to an unbounded list).

I would like to second this.

In Golang if you iterate over a thousand files and

    defer file.Close()
your OS will run out of file descriptors
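For contrast, a scope-based defer like the one proposed for C runs when the enclosing block exits, so the equivalent loop would close each file at the end of its iteration (a sketch assuming the proposed semantics; names/nfiles are stand-ins):

    for (int i = 0; i < nfiles; i++) {
        FILE *fp = fopen(names[i], "r");
        if (!fp) continue;
        defer fclose(fp);   /* runs when this iteration's block ends, not at function return */
        /* ... */
    }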

defer was invented by Andrei Alexandrescu who spelled it scope(exit)/scope(failure) [Zig's errdefer]/scope(success) ... it first appeared in D 2.0 after Andrei convinced Walter Bright to add it.

> with extra braces

The extra braces appear to be optional according to the examples in https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3734.pdf (see pages 13-14)
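Going by those examples, the two spellings would look roughly like this (a sketch of the proposed syntax, not verified against any shipping compiler):

    #include <stdio.h>
    #include <stdlib.h>

    void f(void) {
        FILE *fp = fopen("log.txt", "w");
        if (!fp) return;
        defer fclose(fp);                 /* unbraced: defer a single statement */

        char *buf = malloc(64);
        defer { free(buf); buf = NULL; }  /* braced: defer a compound statement */
        /* ... */
    }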

Both defer and RAII have proven to be useful, but RAII has also proven to be quite harmful in cases, in the limit introducing a lot of hidden control flow.

I think that defer is actually limited in ways that are good - I don't see it introducing surprising control flow in the same way.

Defer is also hidden control flow. At the end of every block, you need to read backwards in the entire block to see if a defer was declared in order to determine where control will jump to. Please stop pretending that defer isn't hidden control flow.

> RAII has also proven to be quite harmful in cases

The downsides of defer are much worse than the "downsides" of RAII. Defer is manual and error-prone, something that you have to remember to do every single time.

But of course what you call "surprising" and "hidden" is also RAII's strength.

It allows library authors to take responsibility for cleaning up resources in exactly one place rather than forcing library users to insert a defer call in every single place the library is used.

This certainly isn't RAII—the term is quite literal, Resource Acquisition Is Initialization, rather than calling code as the scope exits. This is the latter of course, not the former.

People often say that "RAII" is kind of a misnomer; the real power of RAII is deterministic destruction. And I agree with this sentiment; resource acquisition is the boring part of RAII, deterministic destruction is where the utility comes from. In that sense, there's a clear analogy between RAII and defer.

But yeah, RAII can only provide deterministic destruction because resource acquisition is initialization. As long as resource acquisition is decoupled from initialization, you need to manually track whether a variable has been initialized or not, and make sure to only call a destruction function (be that by putting free() before a return or through 'defer my_type_destroy(my_var)') in the paths where you know that your variable is initialized.

So "A limited form of RAII" is probably the wrong way to think about it.

To be fair, RAII is so much more than just automatic cleanup. It's a shame how misunderstood this idea has become over the years

Can you share some sources that give a more complete overview of it?

I got out my 4e Stroustrup book and checked the index, RAII only comes up when discussing resource management.

Interestingly, the verbatim introduction to RAII given is:

> ... RAII allows us to eliminate "naked new operations," that is, to avoid allocations in general code and keep them buried inside the implementation of well-behaved abstractions. Similarly "naked delete" operations should be avoided. Avoiding naked new and naked delete makes code far less error-prone and far easier to keep free of resource leaks

From the embedded standpoint, and after working with Zig a bit, I'm not convinced about that last line. Hiding heap allocations seems like it makes it harder to avoid resource leaks!

A long overdue feature.

Though I do wonder what the chances are that the C subset of C++ will ever add this feature. I use my own homespun "scope exit" which runs a lambda in a destructor quite a bit, but every time I use it I wish I could just "defer" instead.

Never; you can already do this with RAII, and naturally it would be yet another thing for people to complain about C++ adding.

Then again, if someone is willing to push it through WG21 no matter what, maybe.

C++ implementations of defer are either really ugly thanks to using lambdas and explicitly named variables which exist only to hold a scoped object, or they depend on macros which need either a long manually namespaced name or risk stepping on the toes of a library. I had to rename my defer macro from DEFER to MYPROGRAM_DEFER in a project due to a macro collision.

C++ would be a nicer language with native defer. Working directly with C APIs (which is one of the main reasons to use C++ over Rust or Zig these days) would greatly benefit from it.

Various macro tricks have existed for a long time but nobody has been able to wrap the return statement yet. The lack of RAII-style automatic cleanups was one of the root causes for the legendary goto fail;[1] bug.

[1] https://gotofail.com/

I do not see how defer would have helped in this case.

Just hope those lambdas aren't throwing exceptions ;)

In many cases that's preferred as you want the ability to cancel the deferred lambda.

It’s pedantic, but in the malloc example, I’d put the defer immediately after the assignment. This makes it very obvious that the defer/free goes along with the allocation.

It would run regardless of whether malloc succeeded or failed, but calling free on a NULL pointer is safe (defined as a no-op in the C spec).
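That is, something like this (sketch, assuming the syntax from the proposal):

    char *p = malloc(n);
    defer free(p);        /* safe even if malloc failed: free(NULL) is a no-op */
    if (!p) return -1;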

I'd say a lot of users are going to borrow patterns from Go, where you'd typically check the error first.

    resource, err := newResource()
    if err != nil {
        return err
    }
    defer resource.Close()
IMO this pattern makes more sense, as running the cleanup usually only makes sense once you have actually acquired the resource in the first place.

free may accept a NULL pointer, but it also doesn't need to be called with one either.

This example is exactly why RAII is the solution to this problem and not defer.

It seems less pedantic and more unnecessarily dangerous due to its non-uniformity: in the general case the resource won't exist on error, and breaking the pattern for malloc adds inconsistency without any actual gain.

Free works with NULL, but not all cleanup functions do. Instead of deciding whether to defer before or after the null check on a case-by-case basis based on whether the cleanup function handles NULL gracefully, I would just always do the defer after the null check regardless of which pair of allocation/cleanup functions I use.
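In C-with-defer terms that would look something like this (sketch, same caveat about the proposed syntax):

    char *p = malloc(n);
    if (!p) return -1;
    defer free(p);        /* only registered once we know the allocation succeeded */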

3…2…1… and somebody writes a malloc macro that includes the defer.
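Something like this, presumably (entirely hypothetical, and arguably exactly the hidden control flow people complain about):

    #define MALLOC_SCOPED(var, size) \
        void *var = malloc(size);    \
        defer free(var)

    MALLOC_SCOPED(buf, 64);   /* declares buf and schedules free(buf) at end of scope */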

Yes!! One step closer to having defer in the standard.

Related blog post from last year: https://thephd.dev/c2y-the-defer-technical-specification-its... (https://news.ycombinator.com/item?id=43379265)

Would defer be considered hidden control flow? I guess it’s not so hidden since it’s within the same function unlike destructors, exceptions, longjmp.

It's one of the most commonly adopted features among C successor languages (D, Zig, Odin, C3, Hare, Jai); given how opinionated some of them are on these topics, I think it's safe to say it's generally well regarded in PL communities.

In Nim too, although the author doesn't like it much.

What I always hated about defer is that you can simply forget or place it in the wrong position. Destructors or linear types (must-use-once types) are a much better solution.

It breaks the idea that statements get executed in the order they appear in the source code, but it ‘only’ moves and sometimes deduplicates (in functions with multiple exit points) statements; it doesn’t hide them.

Of course, that idea already isn’t correct in many languages; function arguments are evaluated before a function is called, operator precedence often breaks it, etc, but this moves entire statements, potentially by many lines.

Always gives me COMEFROM vibes.

Yes, defer is absolutely a form of hidden control flow, even if it's less egregious than exceptions.

I have a personal aversion to defer as a language feature. Some of this is aesthetic. I prefer code to be linear, which is to say that instructions appear in the order that they are evaluated. Further, the presence of defer almost always implies that there are resources that can leak silently.

I also dislike RAII because it often makes it difficult to reason about when destructors are run and also admits accidental leaks just like defer does. Instead what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.

If you dislike things happening out of lexical order, I expect you must already dislike C because of one of its many notorious footguns, which is that the evaluation order of function arguments is unspecified.

About RAII, I think your viewpoint is quite baffling. Destructors are run at one extremely well-defined point in the code: `}`. That's not hard to reason about at all. Especially not compared to often spaghetti-like cleanup tails. If you're lucky, the team does not have a policy against `goto`.

> I have a personal aversion to defer as a language feature.

Indeed, `defer` as a language feature is an anti-pattern.

It does not allow abstracting initialization/de-initialization routines and encapsulating their execution within the resources themselves; instead it transfers the responsibility to manually perform the release or de-initialization to the users of the resources, for each use of the resource.

> I also dislike RAII because it often makes it difficult to reason about when destructors are run [..]

RAII is a way to abstract initialization, it says nothing about where a resource is initialized.

When combined with stack allocation, now you have something that gives you precise points of construction/destruction.

The same can be said about heap allocation in some sense, though this tends to be more manual and could also involve some dynamic component (ie, a tracing collector).

> [..] and also admits accidental leaks just like defer does.

RAII is not memory management, it's an initialization discipline.

> [..] what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.

Why would you want to replicate the same cleanup procedure for a certain resource throughout the code-base, instead of abstracting it in the resource itself?

Abstraction and explicitness can co-exist. One does not rule out the other.

The Linux kernel has been using __attribute__((cleanup)) for a little while now. So far, I've only seen/used it in cases where the alternative (one goto label) isn't very bad. Even there it's basically welcome.

But there are lots of cases in the kernel where we have 10+ goto labels for error paths in complex setup functions. I think when this starts making its way into those areas it will really start having an impact on bugs.

Sure, most of those bugs are low impact (it's rare that an attacker can trigger the broken error paths), but still, this is basically software quality for free; it would be silly to leave it on the table.

And then there's the ACTUAL motivation: it makes the code look nicer.

In C I just used goto - you put a cleanup section at the bottom of your code and your error handling just jumps to it.

  #define RETURN(x) result=x;goto CLEANUP

  int myfunc(void) {
    int result=0;
    struct my_struct *myStruct = NULL; /* NULL so CLEANUP is safe on every path */
    if (commserror()) {
      RETURN(0);
    }
     .....
    /* On success */
    RETURN(1);

    CLEANUP:
    if (myStruct) { free(myStruct); }
    ...
    return result;
  }
The advantage being that you never have to remember which things are to be freed at which particular error state. The style also avoids lots of nesting because it returns early. It's not as nice as having defer but it does help in larger functions.
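For comparison, the rough defer equivalent of the function above (sketch, assuming the TS syntax; free(NULL) keeps the early-error path safe):

    int myfunc(void) {
        struct my_struct *myStruct = malloc(sizeof *myStruct);
        defer free(myStruct);   /* replaces the CLEANUP label on every exit path */
        if (commserror()) {
            return 0;
        }
        /* ... */
        return 1;
    }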

> The advantage being that you never have to remember which things are to be freed at which particular error state.

You also don't have to remember this when using defer. That's the point of defer - fire and forget.

One small nitpick: you don't need the check before the `free` call; `free(NULL)` is fine.

You're right that it's not needed in my example but sometimes the thing that you're freeing has pointers inside it which themselves have to be freed first and in that case you need the if.

There are several other issues I haven't shown like what happens if you need to free something only when the return code is "FALSE" indicating that something failed.

This is not as nice as defer but up till now it was a comparatively nice way to deal with those functions which were really large and complicated and had many exit points.

But it does keep one in the habit of using NULL checks.

The disadvantage is that a "goto fail" can easily happen with this approach. And it actually had happened in the wild.

defer is a stack, scope local and allows cleanup code to be optically close to initialization code.
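For example (sketch, assuming the LIFO ordering the proposals describe):

    {
        defer puts("registered first, runs last");
        defer puts("registered second, runs first");
    }   /* on leaving the block: prints the second line, then the first */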

This looks like a recipe for disaster when you'll free something in the return path that shouldn't be freed because it's part of the function's result, or forget to free something in a success path. Just write

    result=x;
    goto cleanup;
if you meant

    result=x;
    goto cleanup;
At least then you'll be able to follow the control flow without remembering what the magic macro does.

In your cleanup section you have to take the same care with parameters you are putting results into as with any other way of dealing with this. All the macro does is save you from repeating that logic at every exit point.

But you have to give this cleanup jump label a different name for every function.

You don't. Labels are local to functions in C.

Can somebody explain why this is significantly better than the goto pattern?

Genuinely curious, as I only have a small amount of experience with C and have found goto to be OK so far.

I feel like C people, out of anyone, should respect the code gen wins of defer. Why would you rely on runtime conditional branches for everything you want cleaned up, when you can statically determine what cleanup functions need to be called?

In any case, the biggest advantage IMO is that resource acquisition and cleanup are next to each other. My brain understands the code better when I see "this is how the resource is acquired, this is how the resource will be freed later" next to each other, than when it sees "this is how this resource is acquired" on its own or "this is how the resource is freed" on its own. When writing, I can write the acquisition and the free at the same time in the same place, making me very unlikely to forget to free something.

Defer might be better than nothing, but it's still a poor solution. An obvious example of a better, structural solution is C#'s `using` blocks.

    using (var resource = acquire()) {

    } // implicit resource.Dispose();
While we don't have the same simplicity in C because we don't use this "disposable" pattern, we could still perhaps learn something from the syntax and use a secondary block to have scoped defers. Something like:

    using (auto resource = acquire(); free(resource)) {

    } // free(resource) call inserted here.
That's not so different from how a `for` block works:

    for (auto it = 0; it < count; it++) {

    } // automatically inserts it++; it < count; and conditional branch after secondary block of for loop.
A trivial "hack" for this kind of scoped defer would be to just wrap a for loop in a macro:

    #define using(var, acquire, release) \
        auto var = (acquire); \
        for (bool var##_once = true; var##_once; var##_once = false, (release))

    using (foo, malloc(szfoo), free(foo)) {
        using (bar, malloc(szbar), free(bar)) {
            ...
        } // free(bar) gets called here.
    } // free(foo) gets called here.

>"this is how the resource is acquired, this is how the resource will be freed later"

Lovely fairy tale. Now can you tell me how you love to scroll back and examine all the defer blocks within a scope when it ends to understand what happens at that point?

It allows you to put the deferred logic near the allocation/use site, which I noticed was helpful in Go: it becomes muscle memory to write the cleanup as you write a new allocation, and it is even hinted by autocomplete these days.

But it adds a new dimension of control flow, which is less worrisome in a garbage-collected language like Go, whereas in C it can create new headaches around doing things in the right order. I don't think it will eliminate goto error handling for complex cases.

The advantage is that it automatically adds the cleanup code to all exit paths, so you cannot forget it for some. Whether this is really that helpful is unclear to me. When we looked at defer originally for C, Robert Seacord had a list of examples showing how they looked before and after rewriting with defer. At that point I lost interest in this feature, because the new code wasn't generally better in my opinion.

But people know it from other languages, and seem to like it, so I guess it is good to have it also in C.

Thanks, seems like the document is this one

http://robertseacord.com/wp/2020/09/10/adding-a-defer-mechan...

Cf. the recent bug related to goto error handling in OpenSSH, where the "additional" error return value wasn't caught and allowed a security bypass that accepted a failed key.

Cleanup is good. Jumping around with "goto" confused most people in practice. It seems highly likely that most programmers model "defer" differently in their minds.

EDIT:

IIRC it was CVE-2025-26465. Read the code and the patch.

1. Goto pattern is very error-prone. It works until it doesn't and you have a memory leak. The way I solved this issue in my code was a macro that takes a function and creates an object that has said function in its destructor.

2. Defer is mostly useful for C++ code that needs to interact with C API because these two are fundamentally different. C API usually exposes functions "create_something" and "destroy_something", while the C++ pattern is to have an object that has "create_something" hidden inside its constructor, and "destroy_something" inside its destructor.
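With defer in C, that create/destroy shape stays explicit but still gets scope-based cleanup (sketch; something_t, create_something and destroy_something are just the placeholder names from above):

    something_t *s = create_something();
    if (!s) return -1;
    defer destroy_something(s);   /* paired with the create right where it happens */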

I found that some error-prone cases are harder to express with defer in Zig.

For example, if I have an FFI function that transfers the ownership of some allocator in the middle of the function.

A related article discussing Gustedt’s first defer implementation, which also looks at the generated assembly:

https://oshub.org/projects/retros-32/posts/defer-resource-cl...

I’m just about to start teaching C programming classes to first-year university CS students. Would you teach `defer` straight away to manage allocated memory?

My suggestion is no - first have them do it the hard way. This will help them build the skills to do manual memory management where defer is not available.

Once they do learn about defer they will come to appreciate it much more.

In university? No, absolutely not straight away.

The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals.

It's still only in a TS, not in ISO C, if that matters.

No, but also skip malloc/free until late in the year. And when it comes to heap allocation, don't use example code which allocates and frees single structs; instead introduce concepts like arena allocators to bundle many items with the same max lifetime, pool allocators with generation-counted slots, and other memory management strategies.

No. They need to understand memory failures. Teach them what it looks like when it's wrong. Then show them the tools to make things right. They'll never fully understand those tools if they don't understand the necessity of doing the right thing.

IMHO, it is in the best interest of your students to teach them standard C first.

If you're teaching them to write an assembler, then it may be worth teaching them C, as a fairly basic language with a straightforward/naive mapping to assembly. But for basically any other context in which you'd be teaching first-year CS students a language, C is not an ideal language to learn as a beginner. Teaching C to first-year CS students just for the heck of it is like teaching medieval alchemy to first-year chemistry students.

As others have commented already: if you want to use C++, use C++. I suspect the majority of C programmers neither care nor want stuff like this; I still stay with C89 because I know it will be portable anywhere, and complexities like this are completely at odds with the reason to use C in the first place.

I would say the complexity of implementing defer yourself is a bit annoying in C. However, defer itself, as a language feature in a C standard, is pretty reasonable. It’s a very straightforward concept and fits well within the scope of C, just as it fit within the scope of Zig. As long as it’s the Zig defer, not the Go one…

I would not introduce Zig’s errdefer though. That one would need additional semantic changes in C to express errors.

> I still stay with C89 because I know it will be portable anywhere

With respect, that sounds a bit nuts. It's been 37 years since C89; unless you're targeting computers that still have floppy drives, why give up on so many convenience features? Binary prefixes (0b), #embed, defined-width integer types, more flexibility with placing labels, static_assert for compile-time sanity checks, inline functions, declarations wherever you want, complex number support, designated initializers, countless other things that make code easier to write and to read.

Defer falls in roughly the same category. It doesn't add a whole lot of complexity, it's just a small convenience feature that doesn't add any runtime overhead.

I think a lot of the really old school people don't care, but a lot of the younger people (especially those disillusioned with C++ and not fully enamored with Rust) are in fact quite happy for C to evolve and improve in conservative, simple ways (such as this one).

> still stay with C89

You're missing out on one of the best-integrated and most useful features ever added to a language as an afterthought (C99 designated initialization). Even many modern languages (e.g. Rust, Zig, C++20) don't get close when it comes to data initialization.

Not necessarily. In classic C we often build complex state machines to handle errors - especially when there are many things that need to be initialized (malloced) one after another and each might fail. Think the infamous "goto error".

I think defer{} can simplify these flows sometimes, so it can indeed be useful for good old style C.

That ship has sailed. Lots of open source C projects already use __attribute__((cleanup)) (which is the same thing).

Defer is a very simple feature where all code is still clearly visible and nothing is called behind your back. I write a lot of C++, and it's a vastly more complex language than "C with defer". Defer is so natural to C that compilers have long had non-standard ways of mimicking it (e.g. __attribute__((cleanup))).

If you want to write C++, write C++. If you want to write C, but want resource cleanup to be a bit nicer and more standard than __attribute__((cleanup)), use C with defer. The two are not comparable.

Isn’t goto cleanup label good enough anyway?

The goto approach also covers some more complicated cases.

Then why not even better, K&R C with external assembler, that is top. /s

I quite dislike the defer syntax. IMO the cleanup attribute is the much nicer method of dealing with RAII in C.

I think C should be reset to C89 and then go over everything including proposals and accept only the good&compatible bits.

If you can't compile K&R, you should label your language "I can't believe it's not C!".

I don't have time to learn your esolang.

Such an addition is great. But there is something even better: destructors in C++. Anyone who writes C should consider using C++ instead, where destructors provide a more convenient way to free resources.

C++ destructors are implicit, while defer is explicit.

You can just look at the code in front of you to see what defer is doing. With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen.

Sure, if the situation arises frequently, it's nice to be able to design a type that "just works" in C++. But if you need to clean up reliably in just this one place, C++ destructors are a very clunky solution.

Destructors are only comparable when you build an OnScopeExit class which calls a user-provided lambda in its destructor, which then does the cleanup work - so it's more like a workaround to build a defer feature out of C++ features.

The classical case of 'one destructor per class' would require designing the entire code base around classes, which comes with plenty of downsides.

> Anyone who writes C should consider using C++ instead

Nah thanks, been there, done that. Switching back to C from C++ about 9 years ago was one of my better decisions in life ;)

I think destructors are different, not better. A destructor can’t automatically handle the case where something doesn’t need to be cleaned up on an early return until something else occurs. Also, destructors are a lot of boilerplate for a one-off cleanup.

i write C++ every day (i actually like it...) but absolutely no one is going to switch from C to C++ just for dtors.

For the cases where a destructor isn’t readily available, you can write a defer class that runs a lambda passed to its constructor in its destructor, can’t you?

Would be a bit clunky, but that can (¿somewhat?) be hidden in a macro, if desired.

I took some shit in the comments yesterday for suggesting "you can do it with a few lines of standard C++" to another similar thread, but yet again here we are.

Defer takes 10 lines to implement in C++. [1]

You don't have to wait 50 years for a committee to introduce basic convenience features, and you don't have to use non-portable extensions until they do (and in this case the __attribute__((cleanup)) has no equivalent in MSVC), if you use a remotely extensible language.

[1] https://www.gingerbill.org/article/2015/08/19/defer-in-cpp/

Why is this a relevant comment? We're talking about C, not C++. If you wanted to suggest using an alternative language, you're probably better off recommending Zig: defer takes 0 lines to implement there, and it's closer to C than what C++ is.
