the best language I have ever met(?)

Artur Skawina via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Fri Nov 25 15:43:04 PST 2016


On 11/25/16 18:33, Jonathan M Davis via Digitalmars-d-learn wrote:
> On Friday, November 25, 2016 18:20:11 Artur Skawina via Digitalmars-d-learn 
> wrote:
>> On 11/25/16 17:30, Jonathan M Davis via Digitalmars-d-learn wrote:
>>> On Friday, November 25, 2016 17:03:32 Artur Skawina via
>>>>    enum T[N] staticArray(T, alias ELS, size_t N=ELS.length) = ELS;
>>>>    auto arr = staticArray!(ubyte, [1, 2, 3, 4]);
>>>
>>> That won't work with variables. e.g.
>>>
>>> ubyte a;
>>> auto arr = staticArray!(ubyte, [1, 2, 3, 4, a]);
>>>
>>> would fail to compile. It only works when all of the values are known at
>>> compile time, whereas
>>>
>>> ubyte a;
>>> ubyte[5] arr = [1, 2, 3, 4, a];
>>>
>>> would compile just fine.
>>
>> Now you're trying to change the scope. Of course this is a hack,
>> that's only useful in certain contexts, such as initializing static
>> arrays with known values, which this subthread is about.
> 
> How is it changing the scope? What has been asked for on several occasions -
> and what int[$] was supposed to fix - was the ability to initialize a static
> array while inferring its size.

It's a known language limitation, which can be worked around with
hacks such as the one I showed. They help in the very common cases
that appeared in this thread, add zero RT cost, and work with VRP.
I didn't realize you were suggesting the function-helper route for
the general case - no scope change, sorry.
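
E.g. a minimal sketch (untested here) of where the hack quoted above
does and doesn't apply:

   enum T[N] staticArray(T, alias ELS, size_t N = ELS.length) = ELS;

   // Fine: all elements are known at CT, so VRP narrows the int
   // literals to ubyte and the length is inferred as 4.
   auto ct = staticArray!(ubyte, [1, 2, 3, 4]);
   static assert(is(typeof(ct) == ubyte[4]));

   // Won't compile: `a` is RT data, so the literal can't be read at CT.
   //ubyte a;
   //auto rt = staticArray!(ubyte, [1, 2, 3, 4, a]);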

> ubyte a;
> ubyte[5] arr = [1, 2, 3, 4, a];
> 
> is a perfectly legitimate example of initializing a static array, and
> there's no reason why it shouldn't be a goal to have it work with a function
> that infers the size of the static array.

The problem with such a function is that, just like every other
function, its type cannot depend on RT data, and that `typeof([1, a])`
is `int[]`. The information is lost at the function boundary. So the
possible improvements are a) changing the array literal semantics,
and b) improving IFTI.
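
A tiny illustration (the helper signature is hypothetical):

   ubyte a;
   static assert(is(typeof([1, a]) == int[]));  // widened to int,
                                                // length not in the type

   // a would-be helper only ever sees an `int[]`; els.length is
   // RT-only inside it:
   //auto helper(int[] els) { return els; }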

> We'd actually have it right now with
> 
> T[n] staticArray(T, size_t n)(auto ref T[n] arr)
> {
>     return arr;
> }
> 
> except that VRP only works right now if no inference is done when
> instantiating the template.
> 
> auto sa = staticArray!(ubyte, 4)([1, 2, 3, 4]);
> 
> compiles just fine, but that obviously defeats the purpose of the template.
> If the compiler is improved so that
> 
> auto sa = staticArray!ubyte([1, 2, 3, 4]);
> 
> also works with VRP, then everything works just like it would with
> 
> ubyte a;
> ubyte[5] arr = [1, 2, 3, 4, a];
> 
> except that with the function, the size would be inferred.
> 
> ubyte a;
> auto arr = staticArray!ubyte([1, 2, 3, 4, a]);

IOW you want to improve IFTI, so that `n` is inferred from the
length of the passed argument. That would indeed work for array
literals and CTFE-able expressions. Any improvement to IFTI is a
good thing, but the RT cost of this helper could be high whenever it
isn't inlined and completely optimized away.
If the cost isn't an issue and a different syntax is acceptable,
then this should already work:

   template staticArray(T, E...) {
      T[E.length] staticArray() @property { return [E]; }
   }
   template staticArray(E...) {
      typeof([E][0])[E.length] staticArray() @property { return [E]; }
   }

   ubyte a;
   auto sa = staticArray!(ubyte, 1, 2, 3, 4, a);
   auto sb = staticArray!(1, 2, 3, 4, a);
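
If those instantiations work as intended, the inferred types should
be (untested):

   static assert(is(typeof(sa) == ubyte[5]));
   static assert(is(typeof(sb) == int[5]));  // common type of the
                                             // literal's elements is int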

artur

