Second Draft: Coroutines

Mai Lapyst mai at lapyst.by
Mon Feb 3 21:43:09 UTC 2025


On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote:
> This is not the first day in programming and, unfortunately, I 
> know perfectly well how the Overton window works:

Sure, but you forget a crucial detail in your analysis: humans 
and intention. Other projects (maybe with corporate funding 
behind them) will indeed throw usability under the bus for some 
sweet, sweet money (i.e. blockchain / AI), but Dlang is a 
community effort, entirely driven and held up by humans who put 
their heart into it. Throwing both into the same bin and drawing 
conclusions about them isn't fair game.

While the same could happen to Dlang, it would only be because 
no one is currently working on these features, and that's only 
because there are generally too few contributors, which in turn 
is an effect of there being hardly any help for onboarding / 
mentoring a new contributor onto the project, which also comes 
from the lack of people wanting to put effort into the language. 
But that's all a management & reputation issue of the project, 
not ill intent or a general lack of empathy towards people that 
prefer to work at a lower level (contrast golang, which doesn't 
even give you access to its greenthread implementation for you 
to tweak!).

> Asynchronous function can not be inlined at the place of use 
> and use a faster allocation on the stack.

First: they are just **functions**, of course they will use 
stack allocation whenever possible, just like normal functions. 
The only difference is the state machine that's wrapped around 
them. Any value that needs to survive into another state is put 
outside of the stack. But that still doesn't say **how** this 
memory gets allocated, as this is the responsibility of the 
executor driving the state machine! So you can perfectly well 
allocate it on the stack too, removing any "performance issue" 
that might arise. The only downside then is that your state 
machine is somewhat useless, but so it would be if you wrote it 
by hand, so it's then more a design problem of the executor than 
of the technique.
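A minimal hand-rolled sketch of that idea (the names here are made up for illustration, not from any proposal): the "frame" struct holds everything that must survive a suspension point, and the caller, acting as a trivial executor, is free to keep that frame on its own stack:

```d
import std.stdio;

// Hypothetical hand-written state machine for an async-style task.
// Values that must survive across suspension points live in the
// frame (this struct); *where* the frame itself is allocated —
// stack, GC heap, pool — is up to whoever drives it.
struct CountTask {
    int state = 0; // which resume point we are at
    int i;         // survives across suspensions, so it sits in the frame

    // Returns true while there is more work left to do.
    bool resume() {
        switch (state) {
        case 0:
            i = 0;
            state = 1;
            goto case;       // fall through to the first resume point
        case 1:
            if (i < 3) {
                writeln("tick ", i);
                ++i;
                return true; // "suspend": caller may resume us later
            }
            state = 2;
            return false;
        default:
            return false;
        }
    }
}

void main() {
    CountTask t;              // frame allocated on the stack — no heap needed
    while (t.resume()) {}     // a trivial "executor": just run to completion
}
```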

> Benchmark only shows that even the coolest modern JIT compilers 
> are not able to optimize them.

Because you once again try to compare apples with oranges. Of 
course linear code with "normal" functions will be way more 
performant if you just look at the asm generated for it and draw 
your conclusion from that. But once again: async is for handling 
waits on states where you **don't** know when they will be 
ready, such as damn IO! You just can't predict when your hard 
drive / kernel will answer the request for more data; you can 
only wait until it says so! That's why blocking IO fell out of 
favor: it just stalls your program and you can't do anything 
else. How did we solve that? Right, by introducing 
**parallelism** via threads, which is just running code 
asynchronously to each other!!! But that was slow because of 
kernel context switches. The solution? Move it to userspace, aka 
fibers / green threads / lightweight threads! Same technique, 
other place; still the same idea of parallelism by executing 
code **seemingly** asynchronously to each other. Async functions 
/ state machines are just the next evolution of that, just like 
we one day decided that `goto` for simple branching was too 
cumbersome to write + too easy to get wrong, so we created `if X 
... else ...`, `for X ...`, `while X ...` and so forth!

> Where one single fiber will have a dozen fast calls and the 
> optional one yield somewhere in the depths, with async 
> functions you will have a dozen slow asynchronous calls, even 
> if no asynchrony (for example, because caching) will not be 
> required.

Sure, but again you don't compare them fairly. An async call 
(with use of `await`) is just like calling `Fiber.yield()`! So 
to compare them on the same level, you would need to yield in 
every call, which makes my analogy of one thread per `fib(n)` 
call understandable. So yes, of course async functions will be 
wasteful for a **non-async task**, but so would using threads 
for the same thing! In the end, it's not the technique's fault 
if a programmer just uses it wrong; `fib(n)` shouldn't be 
parallel (in any form!) to begin with.
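For illustration, here is what "yielding at every call" looks like with D's existing `core.thread.Fiber` (a toy example, not benchmark code): each `Fiber.yield()` plays the role of an `await`, suspending the fiber and handing control back to the loop at the bottom, which plays the role of the executor.

```d
import core.thread : Fiber;
import std.stdio;

void main() {
    // A fiber that suspends at every step — the fair analogue of
    // an async function that awaits on every call.
    auto f = new Fiber({
        foreach (n; 0 .. 3) {
            writeln("working on step ", n);
            Fiber.yield();   // like `await`: give control back to the caller
        }
    });

    // The "executor": resume the fiber until it has run to completion.
    while (f.state != Fiber.State.TERM)
        f.call();
}
```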

> Any useful code is not pure.

Fib is pure. Addition is pure. Any arithmetic is pure. Modifying 
an object via a method that only changes its fields is pure. I 
would argue they are useful, unless of course you think that 
**any code anywhere** is a waste of time, in which case we can 
stop the whole thing right here. I'm not saying we should all do 
only functional programming (I also don't like overuse of it!), 
but considering what it **teaches** you isn't a bad thing; like 
purity (which dlang has itself!) and effects. Just go ahead: 
grab any code and tell dlang to output the processed dlang code 
(i.e. the "AST" button on run.dlang.io); you'll quickly see that 
a ton of functions are actually marked pure by the compiler 
while containing sensible and useful code!
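A quick sketch of both claims in D (illustrative code, not taken from the thread): a recursive `fib` is strongly pure, and a method that only mutates its own object's fields still compiles with `pure` (D calls this weakly pure).

```d
import std.stdio;

// Strongly pure: same input, same output, no mutable global state touched.
pure int fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Weakly pure: mutates fields, but only through `this` — still `pure`.
struct Counter {
    int value;
    pure void bump() { ++value; }
}

void main() {
    writeln(fib(10));   // 55
    Counter c;
    c.bump();
    writeln(c.value);   // 1
}
```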

> Global (or rather Thread/Fiber Local) variables are used for 
> special programming techniques that allow you to write a 
> simpler, more reliable and effective code.

It's an optimization. Just like making a feature that 
automatically creates & optimizes state machines is. Once again: 
if you're fine with writing the functions by hand, no one is 
stopping you. Fibers in dlang are an entirely **library**-driven 
construct. You can just rip them out of druntime's `core.thread` 
and maintain your own version. Nothing prevents you from that!



More information about the dip.development mailing list