New programming paradigm

EntangledQuanta via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Thu Sep 7 10:13:43 PDT 2017


On Thursday, 7 September 2017 at 15:36:47 UTC, Jesse Phillips 
wrote:
> On Monday, 4 September 2017 at 03:26:23 UTC, EntangledQuanta 
> wrote:
>> To get a feel for what this new way of dealing with dynamic 
>> types might look like:
>>
>> void foo(var y) { writeln(y); }
>>
>> var x = "3"; // or possibly var!(string, int) for the explicit 
>> types used
>> foo(x);
>> x = 3;
>> foo(x);
>>
>> (just pseudo code, don't take the syntax literally, that is 
>> not what is important)
>>
>> While this example is trivial, the thing to note is that there 
>> is one foo declared, but two created at runtime. One for 
>> string and one for an int. It is like a variant, yet we don't 
>> have to do any testing. It is very similar to `dynamic` in C#, 
>> but better since we actually can "know" the type at compile time, 
>> so to speak. It's not that we actually know, but that we write 
>> code as if we knew.. it's treated as if it's statically typed.
>
> It is an interesting thought but I'm not sure of its utility. 
> First let me describe how I had to go about thinking of what 
> this means. I think that today, given a suitable function 
> 'call()', it would be possible to write this:
>
>     alias var = Algebraic!(double, string);
>
>     void foo(var y) {
>         mixin(call!writeln(y));
>     }
>
> Again the implementation of call() is yet to exist but likely 
> uses many of the techniques you describe and use.
>
> Where I'm questioning the utility (and I haven't used C#'s 
> dynamic much) is how often I'm manipulating arbitrary data in 
> the same way, that is to say:
>
>     auto m = var(4);
>     mixin(call!find(m, "hello"));
>
> This would have to throw a runtime exception, that is to say, 
> in order to use the value I need to know its type.

All values have a type ;) In the above case you specified that m 
is an int by setting it to 4 (I assume that is what var(4) 
means). But the downside, at least on some level, is that all 
the usable types must be known or the switch cannot be generated 
(there is the default case, which might be able to solve the 
unknown-type problem in some way).
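
To make this concrete, here is a minimal sketch in today's D of 
how such a switch could be generated from the variant's declared 
type list, with the default case catching held types that have 
no usable find(). callFind is a hypothetical helper, not part of 
any library:

    import std.algorithm.searching : find;
    import std.variant : Algebraic;

    alias var = Algebraic!(double, string);

    // Hypothetical: one branch per declared type of the variant...
    void callFind(V)(V v, string needle)
    {
        foreach (T; V.AllowedTypes)
        {
            // ...but only where find() actually compiles for T.
            static if (__traits(compiles, find(T.init, needle)))
            {
                if (auto p = v.peek!T)
                {
                    find(*p, needle);
                    return;
                }
            }
        }
        // The "default case": the held type has no usable find().
        throw new Exception("find() not usable with the held type");
    }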

> A couple of additional thoughts:
>
> The call() function could do something similar to pattern 
> matching but args could be confusing:
>
>     mixin(call!(find, round)(m, "hello"));
>
> But I feel that would just get confusing. The call() function 
> could still be useful even when needing to check the type to 
> know what operations to do.
>
>     if(m.type == typeid(string))
>         mixin(call!find(m, "hello"));
>
> instead of:
>     if(m.type == typeid(string))
>         m.get!string.find("hello");

The whole point is to avoid those checks as much as possible. 
With the typical library solution using a variant, the checks 
are 100% necessary. With the solution I'm proposing, the 
compiler generates the checks behind the scenes and calls the 
template instantiation that corresponds to the check. That is 
the main difference. We can use a single template that the 
switch directs all the cases to, and since the template is 
compile time we only have to write one and can treat it like any 
other compile-time template (that is the main key here: we are 
leveraging D's templates to deal with the runtime complexity).
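
To illustrate the idea, here is a rough sketch in today's D. 
dispatch() is a hypothetical stand-in for what the mixin or 
compiler would generate: the foreach over the allowed types 
unrolls at compile time into the runtime checks, and each branch 
forwards to the matching instantiation of the one template:

    import std.stdio : writeln;
    import std.variant : Algebraic;

    alias var = Algebraic!(double, string);

    // Hypothetical stand-in for the generated checks.
    void dispatch(alias fun, V)(V v)
    {
        foreach (T; V.AllowedTypes)   // unrolled at compile time
        {
            if (auto p = v.peek!T)    // the generated runtime check
            {
                fun(*p);              // fun!T chosen at compile time
                return;
            }
        }
        assert(0, "held type not in the allowed list");
    }

    // The single template the switch directs everything to.
    void foo(T)(T y) { writeln(y); }

    void main()
    {
        var x = "3";
        dispatch!foo(x);   // runs foo!string
        x = 3.0;
        dispatch!foo(x);   // runs foo!double
    }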

See my reply to Biotronic with the examples I gave; they should 
make this clearer.

How useful such things are is hard to tell without the actual 
ability to use them. The code I created in the other thread was 
useful to me, as it allowed me to handle a variant type that was 
beyond my control (given to me by an external library) in a nice 
and simple way using a template. Since all the types were 
confluent (integral values), I could use a single template 
without any type dispatching... so it worked out well.

E.g., take COM's VARIANT. If you are doing COM programming, 
you'll have to deal with it, and the only way is a large switch 
statement; you can't get around that. Even with this method it 
will still require approximately the same amount of checking, 
because most of the types are not confluent. So in these cases 
all the method does is push the "switch" into the template. BUT 
it still turns the rest into compile-time tests (since the 
runtime test was already done in the switch). Instead of one 
large switch, you can do it in templates (and specialize where 
necessary), which IMO looks nicer, gives more control, and is 
more in line with how D works.
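
For instance, the per-type logic can live in one template whose 
branches are resolved at compile time (handle is hypothetical; 
combined with a dispatch like the one above, the one large 
runtime switch is generated once and everything type-specific 
becomes a compile-time decision):

    import std.stdio : writeln;
    import std.traits : isIntegral, isSomeString;

    // One template instead of a large runtime switch at every
    // call site; these branches are resolved at compile time.
    void handle(T)(T value)
    {
        static if (isIntegral!T)
            writeln("integral: ", value);
        else static if (isSomeString!T)
            writeln("string: ", value);
        else
            writeln("something else: ", value); // the old default case
    }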

Also, most of the work is simply at the "end" point. If, say, 
all of Phobos were rewritten to use these variants instead of 
concrete types, then a normal program would have to deal with 
very little type checking. The downside would be an explosion in 
code size and a decrease in performance (possibly mitigated to 
some degree, but still large).

So it's not a panacea, but nothing is. I see it as more of a 
bridge between runtime and compile time that helps quite well in 
certain cases, e.g., having to write a switch statement over all 
possible types a variable could have. With the mixin, or a 
compiler solution, this is reduced to virtually nothing in many 
cases and ends up just looking like normal D template code. 
Remember, a template is really N different normal functions, so 
templates are quite useful for collapsing code down by large 
factors.


