Reimplementing the bulk of std.meta iteratively

Andrei Alexandrescu SeeWebsiteForEmail at erdani.org
Tue Sep 29 03:14:34 UTC 2020


On 9/28/20 6:46 PM, Bruce Carneal wrote:
> 
> When you leave the type system behind, when you reify, you must assume 
> responsibility for constraints that were previously, and seamlessly, 
> taken care of by the type system.  The drudgery, the friction, follows 
> directly from the decision to escape from the type system (reify) rather 
> than remain within it (type functions).
> 
> I think of it as being similar to CT functions vs templates. Within CT 
> functions you've got the type system on your side. Everybody loves CT 
> functions because everything "just works" as you'd expect.  Near zero 
> additional semantic load.
> Wonderfully boring.
> 
> Within templates, on the other hand, you'd better get your big-boy 
> britches on because it's pretty much all up to you pardner!  (manually 
> inserted constraints, serious tension between generality and 
> debuggability, composition difficulties, lazy/latent bugs in the general 
> forms, localization difficulties, ...)
> 
> If language additions like type functions are off the table, then we're 
> left with LDMs and you've produced what looks like a dandy in 
> reify/dereify.  If we have to step outside the language, if language 
> additions are just not in the cards any more, then something like 
> reify/dereify will be the way to go.
> 
> I hope that we've not hit that wall just yet but even if we have D will 
> remain, in my opinion, head and shoulders above anything else out 
> there.  It is a truly wonderful language.  I am very grateful for the 
> work you, Walter, and many many others have put in to make it so.  (most 
> recent standout, Mathias!  I'm a -preview=in fan)
> 
> Finally, I'd love to hear your comments on type functions vs 
> reify/dereify.  It's certainly possible that I've missed something.  
> Maybe it's a type functions+ solution we should be seeking (type 
> functions for everything they can do, and some LDM for anything beyond 
> their capability).

Years ago, I was on a panel with Simon Peyton Jones, one of the creators 
of Haskell. Nicest guy around, not to mention an amazing researcher. This 
question was asked: "What is the most important principle of programming 
language design?"

His answer was so powerful, it made me literally forget everybody else's 
answer, including my own. (And there were no slouches on that panel, 
save for me: Martin Odersky, Guido van Rossum, Bertrand Meyer.) He said 
the following:

"The most important principle in a language design is to define a small 
core of essential primitives. All necessary syntactic sugar lowers to 
core constructs. Do everything else in libraries."

(He didn't use the actual term "lowering", which is familiar to our 
community, but rather something equivalent such as "reduces".)

That kind of killed the panel, in a good way. Because most other 
questions on programming language design and implementation simply made 
his point shine quietly. Oh yes, if you had a small core and built on it 
you could do this easily. And that. And the other. In a wonderfully meta 
way, all questions got instantly lowered to simpler versions of 
themselves. I will never forget that experience.

D breaks that principle in several places. It has had a cavalier 
attitude to using magic tricks in the compiler to get things going, at 
the expense of fuzzy corners and odd limit cases. Look at hashtables. 
Nobody can create an equivalent user-defined type, and worse, nobody 
knows exactly why. (I recall vaguely it has something to do, among many 
other things, with qualified keys that are statically-sized arrays. 
Hacks in the compiler make those work, but D's own type system rejects 
the equivalent code. So quite literally D's type system cannot verify 
its own capabilities.)
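
A minimal sketch of that asymmetry (hedged: the exact failure mode varies 
by compiler version, and the user-defined side is only summarized here):

```d
void main()
{
    // The built-in associative array accepts a qualified,
    // statically-sized array as its key type...
    int[immutable(char[4])] aa;
    immutable char[4] key = "abcd";
    aa[key] = 1;
    assert(aa[key] == 1);

    // ...yet a user-defined hash table templated on the same key type
    // reportedly cannot handle it uniformly: the hashing and comparison
    // machinery the compiler conjures up for built-in AAs is not
    // expressible in the type system proper.
}
```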

Or take implicit conversions. They aren't fully documented, and the only 
way to figure things out is to read most of the 7193 lines of 
https://github.com/dlang/dmd/blob/master/src/dmd/mtype.d. That's not a 
small core with a little sugar on top.
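
To give a taste of the rules buried in there, integer narrowing is 
governed by value range propagation. A sketch (assuming a recent DMD; 
exact diagnostics vary):

```d
void main()
{
    byte a = 100;      // OK: the int literal's value fits in byte
    // byte bad = 200; // error: 200 is outside byte's range

    byte b = 27;
    short s = a + b;   // a + b has type int, but value range propagation
                       // proves the result fits in short, so it narrows
    // byte c = a + b; // error: range [-256 .. 254] may not fit in byte
    assert(s == 127);
}
```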

Or take the foreach statement. Through painful trial and error, somebody 
figured out all possible shapes of foreach, and defined `each` to 
support most:

https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d#L904

What should have been a simple forwarding problem took 190 lines that 
could be characterized as very involved. And mind you, it doesn't 
capture all cases because per 
https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d#L1073:

// opApply with >2 parameters. count the delegate args.
// only works if it is not templated (otherwise we cannot count the args)

I know that stuff (which will probably end up on my forehead) because I 
went and attempted to improve things a bit in 
https://github.com/dlang/phobos/pull/7638/files#diff-0d6463fc6f41c5fb774300832e3135f5R805, 
which tries to simplify matters by reducing the foreach cases to seven 
shapes.

To paraphrase Alan Perlis: "If your foreach has seven possible shapes, 
you probably missed some."
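
For reference, here are a few of the shapes foreach accepts today (a 
non-exhaustive sketch; `Upto` and `Tripled` are made-up illustrative 
types, not Phobos names):

```d
// An input range: empty/front/popFront.
struct Upto
{
    int i, n;
    bool empty() const { return i >= n; }
    int front() const { return i; }
    void popFront() { ++i; }
}

// An opApply-based aggregate.
struct Tripled
{
    int opApply(scope int delegate(int) dg)
    {
        foreach (x; 0 .. 3)
            if (auto r = dg(x * 3))
                return r;
        return 0;
    }
}

void main()
{
    foreach (x; [10, 20, 30]) {}  // built-in array
    foreach (x; 0 .. 3) {}        // number range
    foreach (x; Upto(0, 3)) {}    // input range
    foreach (i, x; [10, 20]) {}   // index + element
    foreach (x; Tripled()) {}     // opApply
    // ...plus opApply with more parameters, delegate aggregates,
    // static foreach over tuples, and so on.
}
```

Anything like `each` that forwards to foreach has to reckon with every 
one of these shapes, which is where the 190 lines come from.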

Over time, things did get better. We became adept at techniques such as 
lowering, and we now require precision in DIPs.

I very strongly believe that this complexity, if unchecked, will kill 
the D language. It will sink under its own weight. We need a precise, 
clear definition of the language, we need a principled approach to 
defining and extending features. The only right way to extend D at this 
point is to remove the odd failure modes created by said cavalier 
approach to doing things. Why can't I access typeid during compilation, 
when I can access other pointers? Turns out typeid is a palimpsest that 
has been written over so many times, even Walter cannot understand it. 
He told me plainly to forget about trying to use typeid and write 
equivalent code from scratch instead. That code has achieved lifetime 
employment.

And no compiler tricks. I am very, very opposed to Walter's penchant for 
reaching into his bag of tricks whenever a difficult problem comes about. 
Stop messing with the language! My discussions with him on such matters 
are like trying to talk a gambler out of the casino.

That is a long way to say I am overwhelmingly in favor of in-language 
solutions as opposed to yet another addition to the language. To me, if 
I have an in-language solution, it's game, set, and match. No need for 
language changes, thank you very much. An in-language solution doesn't 
just mean no addition is needed; more importantly, it means the language 
has sufficient power to let D coders solve that problem and many others.

My heart almost skipped a beat when Adam mentioned - mid-sentence! - just 
yet another little language addition to go with that DIP for 
interpolated strings. It would make things nicer. You know what? I think 
I'd rather live without that DIP.

So I'm interested in exploring reification mainly - overwhelmingly - 
because it already works in the existing language. To the extent it has 
difficulties, it's because of undue limitations in CTFE and 
introspection. And these are problems we must fix anyway. So it's all a 
virtuous circle. Bringing myself to write `reify` and `dereify` around 
my types is a very small price to pay for all that.


More information about the Digitalmars-d mailing list