lambda code

Vlad Levenfeld via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Fri Apr 3 19:33:38 PDT 2015


On Thursday, 2 April 2015 at 19:27:21 UTC, John Colvin wrote:
> On Wednesday, 1 April 2015 at 23:29:00 UTC, Vlad Levenfeld 
> wrote:
>> On Tuesday, 31 March 2015 at 13:25:47 UTC, John Colvin wrote:
>>> On Tuesday, 31 March 2015 at 12:49:36 UTC, Vlad Levenfeld 
>>> wrote:
>>>> Is there any way (or could there be any way, in the future) 
>>>> of getting the code from lambda expressions as a string?
>>>>
>>>> I've noticed that if I have an error with a lambda that 
>>>> looks like, say
>>>> x=>x+a
>>>>
>>>> the error message will come up referring to it as
>>>> (x) => x + a
>>>>
>>>> so some level of processing has already been done on the 
>>>> expression. Can I get at any of it during compilation? It 
>>>> would be useful for automatic program rewriting.
>>>
>>> Short answer: no. .codeof for functions is something I've 
>>> wanted for ages, but no movement so far.
>>
>> :(
>
> On a more positive note, there's probably an OK way of achieving
> your particular goal without this. Do you have an example?

Well I was just thinking of turning

   r[].map!(v => v.xy*2)
     .zip (s[])
     .map!((v,t) => vec2(v.x*cos(t), v.y*sin(t)))
     .to_vertex_shader ();

or something like that, into a shader program.

Right now I have to do it with strings:

   r[].vertex_shader!(`v`, q{
     vec2 u = v.xy*2;
     gl_Position = vec2(v.x*cos(t), v.y*sin(t));
   });

I just keep thinking that, if I have programs composed of 
individual processing stages, like

   auto aspect_ratio_correction (T,U)(T computation, U canvas) {
     return
       zip (computation, repeat (canvas.aspect_ratio, computation.length))
         .map!(pair => pair[0]/pair[1]); // pair = (v, aspect_ratio); map won't expand the zip tuple
   }

then I can put them in UFCS chains, so that

   vec2[] vertices;
   float time;
   Display display;

   auto kernel = some_program (vertices[], time)
     .aspect_ratio_correction (display);

can be run on either the CPU or the GPU, with that decision made 
lazily:

   kernel[].array; // cpu
   kernel[].computed_on_gpu.array; // compute on gpu, read back to cpu

So I'd like to turn "place of execution" into a lazily evaluated 
range adaptor, and maybe reduce the need to maintain separate CPU 
and GPU code for the same algorithms. This seems impossible without 
something like .codeof or, better yet, access to the AST.
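
To make that concrete, here's a rough sketch of what such a stage 
adaptor might look like today (the names stage/glsl/cpu are just 
placeholders, and the actual shader generation is omitted): without 
.codeof the expression has to be supplied twice, once as a D 
callable for the CPU path and once as a string for the shader 
generator.

   import std.algorithm.iteration : map;

   // sketch only -- "stage", "glsl" and "cpu" are placeholder names
   auto stage (string glsl_body, alias d_fun, R)(R range) {
     struct Stage {
       R source;
       enum glsl = glsl_body;                   // fed to the shader generator
       auto cpu () { return source.map!d_fun; } // lazy cpu path
     }
     return Stage (range);
   }

   unittest {
     auto s = [1.0f, 2.0f].stage!("v * 2.0", v => v * 2);
     assert (s.cpu.front == 2.0f); // evaluated on the cpu
     assert (s.glsl == "v * 2.0"); // what the gpu path would consume
   }

That duplication is exactly what .codeof (or AST access) would 
remove.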

I can already unwrap the type of a composed range to see how it's 
constructed, but I don't get any information about the functions 
involved in higher-order adaptors. The idea of doing compile-time 
restructuring of these UFCS chains is interesting to me, but I feel 
like I only have half of what I need to give it a proper try.
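
The half I do have looks like this -- the wrapper types are easy to 
inspect, it's only the function aliases inside them that stay 
opaque:

   import std.algorithm.iteration : map;
   import std.range : iota;

   void main () {
     auto r = iota (3).map!(x => x * 2);

     // prints something like MapResult!(__lambda..., Result):
     // the adaptor structure is visible, but the lambda is just an
     // opaque symbol -- there's no way to get "x => x * 2" back out
     pragma (msg, typeof(r).stringof);
   }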

