How to Stop Linux Threads Cleanly
mazzo.li · Posted 3 months ago · Active 2 months ago
Key topics: Linux Threads, Thread Cancellation, Concurrency
The article discusses the challenges of stopping Linux threads cleanly; the discussion explores various approaches to thread cancellation, highlighting the trade-offs and complexities involved.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion: 85 comments; first comment 5 days after posting; peak of 44 comments in the 120-132h window; average 21.3 comments per period.
Key moments
1. Story posted: Oct 15, 2025 at 3:28 AM EDT (3 months ago)
2. First comment: Oct 20, 2025 at 11:47 AM EDT (5 days after posting)
3. Peak activity: 44 comments in the 120-132h window
4. Latest activity: Oct 21, 2025 at 5:08 PM EDT (2 months ago)
ID: 45589156 · Type: story · Last synced: 11/20/2025, 5:11:42 PM
Hopefully this improves eventually? Who knows?
[1] https://devblogs.microsoft.com/oldnewthing/20150814-00/?p=91...
[2] https://devblogs.microsoft.com/oldnewthing/20191101-00/?p=10...
[3] https://devblogs.microsoft.com/oldnewthing/20140808-00/?p=29...
there are a lot more, I'm not linking them all here.
[1] https://en.cppreference.com/w/cpp/thread/thread/~thread.html
[2] https://devblogs.microsoft.com/oldnewthing/20120105-00/?p=86...
(And what other kind of boolean is there, besides atomic? It's either true or it's false, and if nothing can set it back to false once it goes true, I don't see the hazard. It's a CPU, not an FPGA.)
Without memory order guarantees enforced by memory barriers, a write to the boolean in thread A is not guaranteed to be observed by thread B. That matters both after initialization--where thread A sets the boolean to false but thread B may observe true, false, or invalid--and also after the transition--where thread B may fail to observe that the boolean has flipped from false to true.
[edit: I'm not sure the above reasoning actually matters; as stated already by parent, "It's a CPU, not an FPGA"; modern multicore shared-memory CPUs have coherent caches]
No, that's not correct. Memory ordering doesn't influence how fast a write is propagated to other cores, that's what cache coherency is for. Memory ordering of an access only matters in relation to accesses on other memory locations. There's a great introduction to this topic by Mara Bos: https://marabos.nl/atomics/memory-ordering.html
There are hypothetical, historical, and special-purpose architectures which don't have cache coherency (or implement it differently enough to matter here), but for all practical purposes, it seems that all modern, general-purpose architectures implement it.
a simple clear loop that looks for a requested stop flag with a confirmed stop flag works pretty well. this can be built into a synchronous "stop" function for the caller that sets the flag and then does a timed wait on the confirmation (using condition variables and pthread_cond_timedwait or waitforxxxobject if you're on windows).
The examples in this article IIRC were something like this.
You're still going to be arbitrarily delayed if do_stuff() (or one of its callees, maybe deep inside the stack) delays, or if the sleep call does. If you can't accept this, maybe don't play with threads; they are dangerous.
i think on windows you can wait on both the sockets/file descriptors and condition variables with the same waitforxxxobject blocking mechanism. on linux you can do libevent, epoll, select or pthread_cond_timedwait. all of these have "release on event or after timeout" semantics. you can use eventfd to combine them.
i would not ever recommend relying on signals and writing custom cleanup handlers for them (!).
unless they're blocked waiting for an external event, most system calls tend to return in a reasonable amount of time. handle the external event blocking scenario (stuff that select waits for) and you're basically there. moreover, if you're looking to exit cleanly, you probably don't want to take your chances interrupting syscalls with signals (!) anyway.
> If you can't accept this, maybe don't play with threads, they are dangerous.
too late. when i first started playing with threads, linux didn't really support them.
Not incompatible with what I said.
> with a timeout to keep an eye on an exit flag
This is the stupid part. You will burn CPU cycles waking up spuriously for timeouts with no work to do. Setting the flag won't wake up the event loop until the timeout hits, adding pointless delay.
You want to make signalling an exit to actually wake up your event loop. Then you also don't need a timeout.
I.e. you should make your "ask to exit" code use the same wakeup mechanism as the work queue, which is what I said at the beginning. Not burning CPU polling a volatile bool in memory on the side.
This is exactly what condwait + condsignal do.
it's the smart part. waking up at 50hz or 100hz is essentially free and if there's an os bug or other race that causes the one time "wake up to exit" event to get lost, the system will still be able to shut down cleanly with largely imperceptible delay. it also means that it can be ported to systems that don't support combined condition variable/fd semantics.
It is not, actually. This extremely simple protocol is race-free.
Without the signaling thread acquiring a mutex, you might end up signaling after T2 has checked the boolean, but before it has called cond_wait.
But this can be solved by processing the async signal in a deferred manner from some other watcher thread
You could maybe allow a queue skipping feature to be used for stop messages... But if it's only for stop messages, set an atomic bool stop, then send a stop message. If the thread just misses the stop bool and waits for messages, you'll get the stop message; if the queue is large, you'll get the stop bool.
ps, hi
This is a race condition. When you "spin" on a condition variable, the stop flag you check must be guarded by the same mutex you give to cond_wait.
See this article for a thorough explanation:
https://zeux.io/2024/03/23/condvars-atomic/
If only there were a way to stop a while loop without needing an extra conditional with break...
The right approach is to avoid simple syscalls like sleep() or recv(), and instead use multiplexing calls like epoll() or io_uring(). These natively support being interrupted by some other thread because you can pass, at minimum, two things for them to wait for: the thing you're actually interested in, and some token that can be signalled from another thread. For example, you could create a unix socket pair, do a read wait on one end alongside the real work, then write to the other end from another thread to signal cancellation. Of course, by the time you're doing that you really could multiplex useful IO too.
You also need to manually check this mechanism from time to time even if you're doing CPU bound work.
If you're using an async framework like asyncio/Trio in Python or ASIO in C++, you can request a callback to be run from another other thread (this is the real foothold because it's effectively interrupting a long sleep/recv/whatever to do other work in the thread) at which point you can call cancellation on whatever IO is still outstanding (e.g. call task.cancel() in asyncio). Then you're effectively allowing this cancellation to happen at every await point.
(In C# you can pass around a CancellationToken, which you can cancel directly from another thread to save that extra bit of indirection.)
This is probably the cleanest solution that is portable.
But I also disagree with it. Yes, the logical conclusion of starting down that path is that you end up with full on use of coroutines and some IO framework (though I don't see the problem with that). But a simple wrapper for individual calls that is recv+cancel rather than just recv etc is better than any solution mentioned in the blog post.
The fact is, if you want to wait for more than one thing at once at the syscall level (in this case, IO + inter thread cancellation), then the way to do that is to use select or poll or something else actually designed for that.
>> It’s quite frustrating that there’s no agreed upon way to interrupt and stack unwind a Linux thread and to protect critical sections from such unwinding. There are no technical obstacles to such facilities existing, but clean teardown is often a neglected part of software.
I think it is a "design feature". In C everything is low level, so I have no expectation of a high-level feature like "stop this thread and clean up the mess". IMHO asking for that is similar to asking for GC in C.
Because tasks frequently return to the scheduler, the scheduler can do a "should stop" check there (and since it might be possible to squeeze the check into other atomic state bitmaps, it might have zero relevant performance overhead: a single is-bit-set check), and then properly shut tasks down. Now "properly shut down tasks" isn't as trivial as the "cleaning up local resources" part normally is, because for graceful shutdown you normally also want to clean up remote resources, e.g. transaction state. But this comes down to the difference between "somewhat forced shutdown" and "graceful shutdown". In very many cases you want graceful shutdown, and only force it if that doesn't work. Another reason not to use a naive forced-only shutdown...
Interpreted languages can do something similar in a very transparent manner (if they want to), but run into similar issues wrt. locking and forced unwinding/panics from arbitrary places as C does.
Sure, a very broken task might block long term. But in that case you are often better off killing it as part of process termination instead, and if that doesn't seem an option for "resilience" reasons, then you are already in "use multiple processes for resilience" (potentially across different servers) territory IMHO.
So as much as forced thread termination looks tempting, I found that any time I thought I needed it, it was because I did something very wrong elsewhere.
Actually they predate the whole "async" movement, or whatever you want to call it.
Also, the article is about user-space threads, i.e. OS threads, not kernel-space threads (which use kthread_*, not pthread_*). Stopping a kthread works by setting a flag to indicate it's supposed to stop, waking the thread, and then waiting for it to exit, i.e. much closer to the `if (stop) exit` example than to any signal usage.
I think you have a very strange definition of "user-space", "kernel-space".
kernel space is what runs _in_ the kernel, it doesn't involve pthreads (on any OS) and uses kthreads (on Linux).
POSIX threads are user-space threads. It doesn't matter that they are scheduled by the kernel; that is the norm for threads in user space. They're also known as OS threads.
What you probably mean by user-space threads are green threads, which are built on one or more OS threads but have an additional scheduling layer that can multiplex multiple green threads onto one OS thread.
Does anyone here remember Windows 3.1?
both are forms of cooperative multi tasking/threading
but there are many differences in how exactly it is implemented
and most relevantly you run the cooperative tasks on a pool of OS threads which are preempted, so a single thread hanging won't hang your whole application. And dev tooling has gotten much better since then, that helps a lot too.
also Windows 3.1 kinda predates me, so no not really
PS: And JS in the browser still uses cooperative multitasking, and a whole website hanging isn't exactly the norm ;) Partially because you can opt for preempted threads (workers), which also happen not to be good at handling termination. And partially because a lot of the heavy logic is moved out of web apps and into the browser itself (e.g. layout, rendering, etc.). It's pretty common to set up one such worker and then use message passing to send work to it, but this means that between concurrent tasks there is no preemption, and one way to handle arbitrary numbers of concurrent requests with a very limited number of workers is to make each worker internally use cooperative multitasking. So we are kind of back at cooperative multitasking on top of preempted OS threads.
The write-up on how they're dealing with it starts at https://eissing.org/icing/posts/pthread_cancel/.
Install an empty SIGINT signal handler (without SA_RESTART), then run the loop.
When the thread should stop:
* Set stop flag
* Send a SIGINT to the thread, using pthread_kill or tgkill
* Syscalls will fail with EINTR
* Check for EINTR and the stop flag; then we know we have to clean up and stop
Of course a lot of code will just retry on EINTR, so that requires having control over all the code that does syscalls, which isn't really feasible when using any libraries.
EDIT: The post describes exactly this method, and what the problem with it is, I just missed it.
That's not to say people do or that it's a good idea to try.
* You control all the IO, and then you can use some cooperative mechanism to signal cancellation to the thread.
* You don't control IO code at the syscall level (e.g. you're using some library that uses sockets under the hood, such as a database client library)... But then it's just obvious you're screwed. If you could somehow terminate the thread abruptly then you'll leak resources (possibly leaving mutexes locked, as you said), or if you interrupt syscalls with an error code then the library won't understand it. That's too trivial to warrant a blog post fussing about signals.
The only useful discussion to have on the topic of thread cancellation is what happens when you can do a cooperative cancel, so I don't think it's fair to shoot that discussion down.
I'm not sure there's any better solution if you are dealing with a library that creates threads and doesn't provide an API to shut them down.
No, thread cancelation cannot happen in arbitrary places. Or doesn't have to.
There are two kinds of cancelation: asynchronous and deferred.
POSIX provides an API to configure this for a thread, dynamically: pthread_setcanceltype.
Furthermore, cancelation can be enabled and disabled also.
Needless to say, a thread would only turn on asynchronous cancelation over some code where it is safe to do so, where it won't be caught in the middle of allocating resources, or manipulating data structures that will be in a bad state, and such.

> If we could know that no signal handler is ran between the flag check and the syscall, then we’d be safe.
If you're willing to write assembly, you can accomplish this without rseq. I got it working many years ago on a bunch of platforms. [1] It's similar to what they did in this article: define a "critical region" between the initial flag check and the actual syscall. If the signal happens here, ensure the instruction pointer gets adjusted in such a way that the syscall is bypassed and EINTR returned immediately. But it doesn't need any special kernel support that's Linux-only and didn't exist at the time, just async signal handlers.
(rseq is a very cool facility, btw, just not necessary for this.)
[1] Here's the Linux/x86_64 syscall wrapper: https://github.com/scottlamb/sigsafe/blob/master/src/x86_64-... and the signal handler: https://github.com/scottlamb/sigsafe/blob/master/src/x86_64-...
> No you can't since the compiler will likely inline the syscall (or vsyscall) in your functions.
Do you mean the SYSCALL instruction? The standard practice is to make syscalls through glibc's wrappers. The compiler can't inline stuff across a shared library boundary because it doesn't know what version of the shared library will be requested at runtime. Using alternate non-inlineable wrappers (with some extra EINTR magic) does not newly impose the cost of out-of-lined functions.
It'd be possible to allow this instruction to be inlined into your binary's code (rather than using glibc shared library calls), but basically no one does, because this cost is insignificant compared to the context switch.
In general, inlining can be a big performance win, but mostly not because of the actual cost of the function call itself. It's more that sometimes huge optimizations are possible when the caller and callee are considered together. But these syscall wrappers don't have a lot of expense for the compiler to optimize away.
Do you mean the actual syscall (kernel code)? This is a different binary across a protection boundary; even more reason it can't be inlined.
vsyscall (or its modern equivalent, vDSO) is not relevant here. That's only for certain calls such as `gettimeofday` that do not block and so never return EINTR and (in pthread cancellation terms) are not "cancellation points". There is just no reason to do this for them. And again, the compiler can't inline it, because it doesn't know what code the kernel will supply at runtime.
> The only way is to pay for no-inline cost and have a wrapper that's calling the syscall, so it's a huge cost to pay for a very rare feature (cancelling a thread abruptly is a no-no in most coding conventions).
It's an insignificant cost that you're already paying.
The article is proposing a much safer alternative to cancelling a thread abruptly: using altered syscall wrappers that ensure EINTR is returned if a signal arrives before (even immediately before) entering kernel space. That's the same thing my sigsafe library does.
kill -HUP ?
trying to preemptively terminate a thread in a reliable fashion under linux always seemed like a fool's errand.
fwiw. it's not all that important, they get cleaned up at exit anyway. (and one should not be relying on operating system thread termination facilities for this sort of thing.)
I think a good shorthand for this stuff is
> One can either preemptively or cooperatively schedule threads, and one can also either preemptively or cooperatively cancel processes, but one can only cooperatively cancel threads.
And somehow just a day ago: https://news.ycombinator.com/item?id=45589156
1. Any given thread in an application waits for "events of interest", then performs computations based on those events (= keeps the CPU busy for a while), then goes back to waiting for more events.
2. There are generally two kinds of events: one kind that you can wait for, possibly indefinitely, with ppoll/pselect (those cover signals, file descriptors, and timing), and another kind you can wait for, possibly indefinitely, with pthread_cond_wait (or even pthread_cond_timedwait). pthread_cond_wait cannot be interrupted by signals (by design), and that's a good thing. The first kind is generally used for interacting with the environment through non-blocking syscalls (you can even notice SIGCHLD when a child process exits, and reap it with a WNOHANG waitpid()), while the second kind is used for distributing computation between cores.
3. The two kinds of waits are generally not employed together in any given thread, because while you're blocked on one kind, you cannot wait for the other kind (e.g., while you're blocked in ppoll(), you can't be blocked in pthread_cond_wait()). Put differently, you design your application in the first place such that threads wait like this.
4. The fact that pthread_mutex_lock in particular is not interruptible by signals (by design!) is no problem, because no thread should block on any mutex indefinitely (or more strongly: mutex contention should be low).
5. In a thread that waits for events via ppoll/pselect, use a signal to indicate a need to stop. If the CPU processing done in this kind of thread may take long, break it up into chunks, and check sigpending() every once in a while, during the CPU-intensive computation (or even unblock the signal for the thread every once in a while, to let the signal be delivered -- you can act on that too).
6. In a thread that waits for events via pthread_cond_wait, relax the logical condition "C" that is associated with the condvar to ((C) || stop), where "stop" is a new variable protected by the mutex that is associated with the condvar. If the CPU processing done in this kind of thread may take long, then break it up into chunks, and check "stop" (bracketed by acquiring and releasing the mutex) every once in a while.
7. For interrupting the ppoll/pselect type of thread, send it a signal with pthread_kill (EDIT: or send it a single byte via a pipe that the thread monitors just for this purpose; but then the periodic checking in that thread has to use a nonblocking read or a distinct ppoll, for that pipe). For interrupting the other type of thread, grab the mutex, set "stop", call pthread_cond_signal or pthread_cond_broadcast, then release the mutex.
8. (edited to add:) with both kinds, you can hierarchically reap the stopped threads with pthread_join.