skinny delegates

Jonathan Marler johnnymarler at gmail.com
Fri Aug 3 14:46:59 UTC 2018


On Thursday, 2 August 2018 at 17:21:47 UTC, Steven Schveighoffer 
wrote:
> On 8/2/18 12:21 PM, Jonathan Marler wrote:
>> On Monday, 30 July 2018 at 21:02:56 UTC, Steven Schveighoffer 
>> wrote:
>>> Would it be a valid optimization to have D remove the 
>>> requirement for allocation when it can determine that the 
>>> entire data structure of the item in question is an rvalue, 
>>> and would fit into the data pointer part of the delegate?
>>>
>>> Here's what I'm looking at:
>>>
>>> auto foo(int x)
>>> {
>>>    return { return x + 10; };
>>> }
>>>
>>> In this case, D allocates a pointer on the heap to hold "x", 
>>> and then return a delegate which uses the pointer to read x, 
>>> and then return that plus 10.
>>>
>>> However, we could store x itself in the storage of the 
>>> pointer of the delegate. This removes an indirection, and 
>>> also saves the heap allocation.
>>>
>>> Think of it like "automatic functors".
>>>
>>> Does it make sense? Would it be feasible for the language to 
>>> do this? The type system already casts the delegate pointer 
>>> to a void *, so it can't make any assumptions, but this is a 
>>> slight break of the type system.
>>>
>>> The two requirements I can think of are:
>>> 1. The data in question must fit into a word
>>> 2. It must be guaranteed that the data is not going to be 
>>> mutated (either via the function or any other function). 
>>> Maybe it's best to require the state to be const/immutable.
>>>
>>> I've had several cases where I was tempted to not use 
>>> delegates because of the allocation cost, and simply return a 
>>> specialized struct, but it's so annoying to do this compared 
>>> to making a delegate. Plus something like this would be 
>>> seamless with normal delegates as well (in case you do need a 
>>> real delegate).
>>>
>> 
>> I think the number of cases where you could optimize this is 
>> very small.  And the complexity of getting the compiler to 
>> analyze cases to determine when this is possible would be very 
>> large.
>
> It's not that complicated, you just have to analyze how much 
> data is needed from the context inside the delegate. First 
> iteration, all of the data has to be immutable, so it should be 
> relatively straightforward.

After thinking about it more I suppose it wouldn't be that 
complicated to implement.  For delegate literals, you already 
need to gather a list of all the data you need to put on the 
heap, and if it can all fit inside a pointer, then you can just 
put it there instead.
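To make the effect concrete, here is roughly what the optimization buys, written by hand as the "specialized struct" mentioned earlier: instead of a heap-allocated closure capturing x, the state lives by value inside the returned callable. (AddTen and foo are illustrative names, not from this thread.)

```d
// Hand-written equivalent of the proposed optimization: store the
// captured int by value instead of behind a heap-allocated pointer.
struct AddTen
{
    int x;
    int opCall() const @nogc nothrow { return x + 10; }
}

AddTen foo(int x) @nogc nothrow
{
    return AddTen(x);  // no allocation; x lives inside the return value
}

unittest
{
    auto f = foo(32);
    assert(f() == 42);
}
```

The compiler optimization would do the same thing transparently, smuggling x into the slot where the delegate's context pointer normally goes.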

On that note, I think if a developer wants to be sure that this 
optimization occurs in their code, they should explicitly use a 
library solution like the one in Ocean or the one I gave. If a 
developer relies on the implicit optimization and it doesn't 
apply, they get no information as to why it couldn't be performed 
(i.e. some data was mutable or was not an rvalue). Depending on 
the code, this failure will either go unnoticed or break 
something that depends on the optimization, such as @nogc.  A 
library solution explicitly copies the data into the pointer, so 
you get an explicit error message if it doesn't fit or has some 
other issue.
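A minimal sketch of such a library helper might look like the following. (The name skinny and its exact interface are invented here for illustration; Ocean's actual API differs.) It rejects unsuitable state at compile time with a clear message, which is exactly the "opt-in with explicit errors" behavior described above:

```d
import std.traits : hasIndirections;

// Hypothetical library helper: wraps word-sized, pointer-free state in
// a stack-allocated callable, failing to compile otherwise.
auto skinny(alias fn, T)(T state) @nogc nothrow
{
    static assert(T.sizeof <= (void*).sizeof,
        "captured state does not fit in a pointer-sized slot");
    static assert(!hasIndirections!T,
        "captured state must not contain pointers");

    static struct Skinny
    {
        T state;
        auto opCall() { return fn(state); }
    }
    return Skinny(state);
}

@nogc nothrow unittest
{
    auto dg = skinny!(x => x + 10)(32);
    assert(dg() == 42);
}
```

Passing, say, a struct larger than a pointer would produce the static assert message at the call site instead of silently falling back to a GC allocation.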

Something else to consider is that this would cause some 
discrepancy with the @nogc attribute based on the platform's 
pointer width.  By making this an optimization that developers 
don't have to opt in to, they may be unaware that their code 
depends on an optimization that won't work on other platforms. 
Their code could become platform-dependent without them knowing. 
The counter-argument, I suppose, is that code using delegate 
literals with @nogc would probably be aware of this, but it's 
still something to consider.

In the end, I think that most if not all use cases would be 
better off using the library solution if they want this 
optimization.  It lets the developer opt in to (or out of) the 
optimization, and it enables the compiler to produce error 
messages when they opt in with incompatible usage.
