Thoughts about "Compile-time types" talk

Ola Fosheim Grøstad ola.fosheim.grostad at gmail.com
Fri May 17 13:52:26 UTC 2019


On Friday, 17 May 2019 at 10:51:48 UTC, Alex wrote:
> You'll have to wait a few millennia ;/

Not necessarily. A dynamic language like Python is quite close in 
some respects, except you usually don't want to be bogged down 
with types when doing practical programming in Python.
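As an illustration (my own sketch, not from the talk): in Python, types are ordinary runtime values, so a program can compute with them much the way a compile-time type system does during compilation.

```python
# Python types are first-class values: they can be compared,
# stored, and used to construct new values at runtime.

def widest_type(values):
    # Pick a numeric type based on the values we actually saw --
    # roughly what a type inferencer does at compile time.
    return float if any(isinstance(v, float) for v in values) else int

t = widest_type([1, 2, 3.5])
print(t)         # <class 'float'>
print(t("4.5"))  # 4.5 -- the computed type used as a constructor
```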

Anyway, if you want to design such a language, the best way to go 
would probably be to first build an interpreted language that has 
the desired semantics, then try to envision how that could be 
turned into a compiled language and prototype several alternative 
designs.

> Socrates would be amazing at what we have just as you will be 
> amazing at what the future brings(if humans don't destroy 
> themselves in the process).

Yes, there is that caveat.

> With quantum computing, if it is in the right direction, a new 
> programming paradigm and languages will need to be laid to make 
> it practical.

I don't think quantum computing is a prerequisite. What I believe 
will happen is that "machine learning" will fuel the making of 
new hardware with less "programmer control" and much more 
distributed architectures for computation. So, when that new 
architecture becomes a commodity, we'll see a demand for new 
languages too.

But the market is too small in the foreseeable future, so at least 
for the next few decades we will just get more of the same good old 
IBM PC-like hardware design (evolved, sure, but step-by-step 
backwards compatibility imposes some severe limitations).

I guess there will be a market shift if/when robots become 
household items. Lawnmowers are a beginning, I guess.

> That is a microcosm of what is happening on the large scale.

I don't think it is happening yet, though. The biggest change is 
in the commercial usage of machine learning. Contemporary applied 
machine learning is still at a very basic level, but the ability 
to increase the scale has made it much more useful.

> Compilers are still in the primitive stage on the higher level 
> just as your programming knowledge is primitive on a higher 
> level(and very advanced on a lower level).

Yes, individual languages are very primitive. However, if you 
look at the systems built on top of them, then there is some 
level of sophistication.

> difficult to use for complexity. The only languages I see that 
> handle complexity on any meaningful level are functional 
> programming languages.

Logic programming languages handle it too, but they are difficult 
to utilize outside narrow domains. I believe metaprogramming for 
the type system would be better done with a logic PL, though how 
to make it accessible is a real issue.

Another class would be languages with built-in proof systems.

> Procedural languages actually seem to create an upper limit 
> where it becomes exponentially harder to do anything past a 
> certain amount of complexity.

You can do functional programming in imperative languages too; it 
is just that you tend not to. Anyway, there are mixed languages.
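A minimal sketch of what that can look like (Python here, purely for illustration): pure functions and higher-order combinators, no mutated accumulator.

```python
from functools import reduce

def total_even_squares(xs):
    # Functional style in an imperative language: compose filter,
    # map, and reduce instead of mutating a counter in a loop.
    evens = filter(lambda x: x % 2 == 0, xs)
    squares = map(lambda x: x * x, evens)
    return reduce(lambda acc, x: acc + x, squares, 0)

print(total_even_squares(range(10)))  # 0 + 4 + 16 + 36 + 64 = 120
```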

> The problem with functional programming languages is they are 
> difficult to use for simple stuff and since all programs start 
> out as simple, it becomes very hard to find a happy medium.

Well, I don't know. I think the main issue is that all 
programming languages lack the ability to create a visual 
expression that makes the code easy to reason about. Basically, 
the code looks too visually uniform and we have to struggle to 
read meaning into it.

So as a result we need to keep the model for large sections of 
the program in our heads, which is hard. Basically, a language 
should be designed together with an accompanying editor with some 
visual modelling capabilities, but we don't know how to do that 
well… We just know how to do it in a "better than nothing" 
fashion.

> Unfortunately D+Haskell would be an entirely new language.

I think you would find it hard to bring those two together anyway.

The aims were quite different in the design. IIRC Haskell was 
designed to be a usable vehicle for research so that FP research 
teams could have some synergies from working on the same language 
model.

D was designed in a more immediate fashion, first as a clean up 
of C++, then as a series of extensions based on perceived user 
demand.

> thing. Same goes for programming. Procedural and functional are 
> just two different extremes and making the distinction is 
> actually limiting in the long run.

Well, I'm not sure they are extremes; you are more constrained 
with FP, but that also brings you coherency and less state to 
consider when reasoning about the program.

Interestingly a logic programming language could be viewed as a 
generalization of a functional programming language.
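One way to see that (a toy sketch of mine, the names are made up): a function runs in one direction only, while a logic-programming relation has no fixed direction, so a single relation subsumes several functions.

```python
def add_rel(a, b, c):
    # A relational view of addition: the triple (a, b, c) is in the
    # relation when a + b == c. Any single unknown, marked None,
    # can be solved for -- a function only covers the c = a + b case.
    if a is None:
        return (c - b, b, c)
    if b is None:
        return (a, c - a, c)
    if c is None:
        return (a, b, a + b)
    return (a, b, c) if a + b == c else None

print(add_rel(2, 3, None))  # forward:  (2, 3, 5)
print(add_rel(None, 3, 5))  # backward: (2, 3, 5)
```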

But I'll have to admit that there are languages that make for a 
completely different approach to practical programming, like 
Erlang or Idris.

But then you have research languages that try to stay closer to 
regular imperative programming while still having it as a goal to 
provide a prover, like Whiley and some languages built by people 
at Microsoft that are related to Z3. These are only suitable for 
toy programs at this point, though.

> The goal with such things is choosing the right view for the 
> right problem at the right time and then one can generate the 
> solution very easily.

Yes. However, it takes time to learn a completely different tool. 
If you know C# then you can easily pick up Java and vice versa, 
but there is no easy path when moving from C++ to Haskell.



More information about the Digitalmars-d mailing list