I am excited by the proposal and early work. SBCL Common Lisp is my second most used programming language - glad to see useful extensions. Most of my recent experiments with SBCL involve tooling to be called by LLMs/agents and high speed tooling to provide LLMs/agents with better long term memory and context. Fibers will be useful for most of that work.
what's your first most used programming language?
A 256KB stack per fiber is still insane overhead compared to actors. I guess if we surveyed the programming community, I'd guesstimate that less than 2% of devs even know what the actor model is, and an even smaller percentage have actually used it in production.
Any program that has at least one concurrent task that runs on a thread (naturally there'll be more than one) is a perfect reason to switch to the actor programming model.
Even a simple print() function can see a performance boost from running on a 2nd core. There is a lot of background work to print text (parsing font metrics, indexing screen buffers, preparing scene graphs, etc.) and it's really inefficient to block your main application doing all that work while background cores sit idle. Yet most programmers don't know about this performance boost. Sad state of our education and the industry.
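The idea above can be sketched with a toy Python example (illustrative only, not from the thread or from SBCL): the main thread enqueues output work and moves on, while a worker thread does the expensive part. All names here are made up.

```python
import queue
import threading

out_queue = queue.Queue()   # hypothetical output queue
results = []

def output_worker():
    # Stand-in for the "background work" of printing (font metrics,
    # buffer indexing, ...). Here we just uppercase the text.
    while True:
        item = out_queue.get()
        if item is None:        # sentinel: shut down
            break
        results.append(item.upper())

t = threading.Thread(target=output_worker)
t.start()

# The main thread enqueues and continues immediately, never blocking
# on the formatting work itself.
for msg in ("hello", "world"):
    out_queue.put(msg)
out_queue.put(None)
t.join()
print(results)   # ['HELLO', 'WORLD']
```

Whether this wins in practice depends on how expensive the background work really is relative to the queue handoff.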
256k is just a placeholder for now. The default will get reduced as we get more experience with the draft implementation. The proposal isn't complete yet.
People fixate on stack size, but memory fragmentation is what bites as fiber counts grow, and actors dodge some of that at the cost of more message-passing overhead plus debugging hell once state gets hairy. Atomics or explicit channels cost cycles that never show up in naive benchmarks. If you need a million concurrent 'things' and they are not basically stateless, you're already in Erlang country, and the rest is wishful thinking.
What is more expensive, copying the message, or memory fencing it, or do you always need both in concurrent actors? Are you saying the message passing overhead is less than the cost of fragmented memory? I wouldn't have expected that.
Usually both, but they show up in different places.
You need synchronization semantics one way or another. Even in actor systems, "send" is not magic. At minimum you need publication of the message into a mailbox with the right visibility guarantees, which means some combination of atomic ops, cache coherence traffic, and scheduler interaction. If the mailbox is cross-thread, fencing or equivalent ordering costs are part of the deal. Copying is a separate question: some systems copy eagerly, some pass pointers to immutable/refcounted data, some do small-object optimization, some rely on per-process heaps so "copy" is also a GC boundary decision.
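To make the "send is not magic" point concrete, here is a minimal cross-thread mailbox sketch in Python (illustrative, not SBCL's or any actor library's design). The lock and condition variable are exactly the publication-with-visibility-guarantees cost described above: every send pays for them.

```python
import threading
from collections import deque

class Mailbox:
    """Toy cross-thread mailbox; names are hypothetical."""
    def __init__(self):
        self._items = deque()
        self._cv = threading.Condition()

    def send(self, msg):
        with self._cv:               # atomic publication of the message
            self._items.append(msg)
            self._cv.notify()

    def receive(self):
        with self._cv:
            while not self._items:   # block until a message is visible
                self._cv.wait()
            return self._items.popleft()

box = Mailbox()
got = []

def consumer():
    for _ in range(3):
        got.append(box.receive())

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    box.send(i)
t.join()
print(got)   # [0, 1, 2]
```

Real runtimes replace the lock with lock-free queues or per-scheduler mailboxes, but some ordering/coherence cost remains in every variant.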
The reason people tolerate message passing is that the costs are more legible. You pay per message, but you often avoid shared mutable state, lock convoying, and the weird tail latencies that come from many heaps or stacks aging badly under load. Fragmentation is less about one message being cheaper than one fence. It is more that at very high concurrency, memory layout failures become systemic. A benchmark showing cheap fibers on day one is not very informative if the real service runs for weeks and the allocator starts looking like modern art.
So no, I would not claim actor messaging is generally cheaper than fragmented memory in a local micro sense. I am saying it can be cheaper than the whole failure mode of "millions of stateful concurrent entities plus ad hoc sharing plus optimistic benchmarks." Different comparison.
Actors are a model, I have no clue why you're saying that there is a particular memory cost to them on real hardware. To me, you can implement actors using fibers and a postbox.
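The "fibers plus a postbox" claim can be illustrated with a toy Python sketch (my construction, not SBCL's): a generator plays the fiber, a plain list plays the postbox, and yielding hands control back to whoever drives the fibers.

```python
# Toy actor: a generator-based "fiber" plus a list as its postbox.
def counter_actor(mailbox, log):
    total = 0
    while True:
        while not mailbox:
            yield                 # cooperative: give other fibers a go
        msg = mailbox.pop(0)
        if msg == "stop":
            return
        total += msg
        log.append(total)

mailbox, log = [], []
fiber = counter_actor(mailbox, log)
next(fiber)                        # start the fiber; it parks on the empty mailbox

for msg in (1, 2, 3, "stop"):
    mailbox.append(msg)

# Drive the fiber until it finishes.
try:
    while True:
        next(fiber)
except StopIteration:
    pass

print(log)   # [1, 3, 6]
```

The memory cost people attribute to actors is then really a property of the substrate (stack per fiber, heap per process, etc.), not of the model itself.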
I've no idea what the majority of programmers know or do not know about, but async logging isn't unknown and is supported by libraries like Log4j.
Yeah, that was also my thought.
I always understood that if you give a thread to each actor you get the "active object" design pattern.
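A minimal "active object" sketch in Python, as that pattern is usually described (illustrative; the class and method names are made up): the actor owns a real thread and a queue, and method calls become messages, so only the owner thread ever touches the state.

```python
import queue
import threading

class ActiveCounter:
    """Toy active object: one thread per actor, state owned by that thread."""
    def __init__(self):
        self._queue = queue.Queue()
        self._total = 0
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self._queue.get()
            if msg is None:            # sentinel: finish up
                self._done.set()
                return
            self._total += msg         # state touched only by the owner thread

    def add(self, n):
        self._queue.put(n)             # a method call becomes a message

    def stop(self):
        self._queue.put(None)
        self._done.wait()
        return self._total

actor = ActiveCounter()
for n in (1, 2, 3):
    actor.add(n)
final = actor.stop()
print(final)   # 6
```

One OS thread per actor is exactly the overhead that makes people reach for fibers or green threads when actor counts grow large.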
I remember Joe Armstrong saying something like 2kB in his talks, for an Erlang process. That's 1/128 of 256kB.
2KiB is a peculiar size. Typical page size is 4KiB, and you probably want to allocate two pages - one for the stack and one for a guard page for stack overflow protection. That means that a fiber's minimal size ought to be 8KiB.
The stack size is just mmap-ed address space. It only needs backing memory for the pages actually used by the stack.
Fibers are primarily for when you have a problem which is easily expressible as thread-per-unit-of-work, but you want N > large. They can be useful for e.g. a job system as well, and in that case the primary advantage is the extremely low context switch time, as well as the manual yielding.
There are lots of problems where I wouldn't recommend fibers, though.
I strongly recommend having a look at the mailing list to get some context:
https://sourceforge.net/p/sbcl/mailman/sbcl-devel/thread/CAF...
and
https://sourceforge.net/p/sbcl/mailman/sbcl-devel/thread/CAC...
This will certainly speak to some people taking part in some of the more controversial discussions taking place on HN recently, to put it mildly.
Hmm, must have missed that; I tried to find it. There was an SBCL discussion a few days ago, but I didn't read much controversial in that? I'm a fanboy though, so possibly I'm blind to these things.
Idk if I can quite place it, but by the time it gets to "I've created github issues for each section of your reviews.." in the second link, it's just so infuriating. Just want to shake them and say "for the love of god just talk to them"!
Is there a similar document for the memory arena feature? I tried searching the official documentation, but found scant references and no instructions on how and when to use it.
Huh, you're right.
Apparently it's still considered experimental (even though Google uses it in production) so it's not in the User Manual. There's this: https://github.com/sbcl/sbcl/blob/master/doc/internals-notes...
I personally like the name fiber better than green threads. But everywhere I've worked with user-space cooperative threads, they've always been called green threads.
They are different things, perhaps? Fibers imply strictly cooperative behaviour; I have to explicitly "yield" to give the other fibers a go, while green threads are just runtime-managed threads?
Green threads are cooperative threads. Preemption requires ability to handle hardware interrupts, which are typically handled by the OS.
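The cooperative-only point can be shown with a tiny round-robin scheduler sketch in Python (a toy of my own, not any real runtime): tasks are generators, and a switch can only happen at an explicit yield, which is why preemption would need interrupt support the user-space runtime doesn't have.

```python
from collections import deque

def task(name, steps, trace):
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield                     # the only point where a switch can happen

trace = []
ready = deque([task("a", 2, trace), task("b", 2, trace)])
while ready:
    t = ready.popleft()
    try:
        next(t)                   # run the task up to its next yield
        ready.append(t)           # round-robin: back of the ready queue
    except StopIteration:
        pass                      # task finished; drop it

print(trace)   # ['a0', 'b0', 'a1', 'b1']
```

A task that never yields would starve everyone else, which is exactly the failure mode preemptive threads avoid.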
What do you mean by this?
I really thought this was gonna be a sick material science paper. Still cool though
SBCL - Steel Bank Common Lisp
They should be called Anthony Green Threads. Seriously though, great to see.
Serious question - I thought LLMs were bad at balancing parentheses?
I had some ideas for extending the lem editor (emacs in common lisp) the other day and I am barely literate in Lisp. So I had Claude Code do it.
Fully awesome. No problems. A few paren issues, but it didn't really seem to struggle. It produced working code. It was also really good at analyzing the lem codebase.
I even had it write an agentic coding tool in Common Lisp using the RLM ideas: https://alexzhang13.github.io/blog/2025/rlm/
Lisp is a natural fit for this kind of thing. And it worked well.
(I also suspect if parens were really a problem... there's room here for MCP or other tooling to help. Basically paredit but for agents)
They are much better these days.
Besides, one can easily code a skill+script for detecting the problem and suggesting fixes. In my anecdotal experience it cuts down the number of times dumber models walk in circles trying to balance parens.
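A script of that kind might be as simple as the following Python sketch (my illustration; a real Lisp checker would also have to skip strings, comments, and character literals):

```python
def paren_report(src):
    """Toy paren-balance checker: report unmatched parens in src."""
    depth, problems = 0, []
    for pos, ch in enumerate(src):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:                     # closer with no opener
                problems.append(f"unmatched ) at {pos}")
                depth = 0
    if depth > 0:                             # openers left unclosed
        problems.append(f"{depth} unclosed (")
    return problems

print(paren_report("(defun f (x) (+ x 1)"))   # ['1 unclosed (']
print(paren_report("(+ 1 2))"))               # ['unmatched ) at 7']
```

Feeding a report like this back to the model turns "guess where the paren goes" into a concrete, localized fix.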