bug with CTFE std.array.array ?

monarch_dodra monarchdodra at gmail.com
Wed Jul 10 22:45:01 PDT 2013


On Thursday, 11 July 2013 at 01:06:12 UTC, Timothee Cour wrote:
> import std.array;
>
> void main(){
>   //enum a1=[1].array;//NG: Error: gc_malloc cannot be interpreted at compile time
>   enum a2=" ".array;//OK
>
>   import std.string;
>   //enum a3=" ".splitLines.array;//NG
>   enum a4="".splitLines.array;//OK
>   enum a5=" ".split.array;//OK
>   //enum a6=" a ".split.array;//NG
>   import std.algorithm:filter;
>   enum a7=" a ".split.filter!(a=>true).array;
>   auto a8=" a ".split.array;
>   assert(a8==a7);
>   enum a9=[1].filter!(a=>true).array;//OK
> }
>
>
> I don't understand why the NG cases above fail (with "Error: 
> gc_malloc cannot be interpreted at compile time").
>
> Furthermore, it seems we can bypass the CT error by interleaving 
> filter!(a=>true) (see above), which is even weirder.

Funny, the same question was asked in learn not three days ago:
http://forum.dlang.org/thread/frmaptrpnrgnuvcdfczb@forum.dlang.org
And yeah, it was fixed.
https://github.com/D-Programming-Language/phobos/pull/1305

To answer your question about "filter": the range returned by filter 
doesn't have a length member, so instead of taking the efficient 
length-based code branch in array, array falls back to roughly:

foreach (e; range)
    arr ~= e;

which is more CTFE-friendly than the optimized length-based 
implementation.
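
For illustration, here's a minimal sketch (my own hypothetical helper, 
not the actual Phobos source) of the kind of branching described above, 
assuming the usual std.range primitives: a range with a known length 
gets an eagerly allocated buffer, while everything else falls back to 
appending.

import std.range : ElementType, hasLength, isInputRange;

// Hypothetical toArray, only to illustrate the two code paths.
ElementType!R[] toArray(R)(R range) if (isInputRange!R)
{
    static if (hasLength!R)
    {
        // "Efficient" branch: allocate the whole buffer up front.
        // In Phobos this path went through gc_malloc, which CTFE
        // could not interpret at the time.
        auto arr = new ElementType!R[](range.length);
        size_t i;
        foreach (e; range)
            arr[i++] = e;
        return arr;
    }
    else
    {
        // Fallback branch: plain appending, which CTFE handles fine.
        ElementType!R[] arr;
        foreach (e; range)
            arr ~= e;
        return arr;
    }
}

Since filter's result has no length, interleaving filter!(a=>true) 
pushes array onto the appending path, which is why a7 and a9 above 
compile as enums.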

