Microsoft chose Go instead of C# or Rust to rewrite TypeScript
jmh530
john.michael.hall at gmail.com
Mon Oct 6 13:54:12 UTC 2025
On Monday, 6 October 2025 at 11:48:57 UTC, Dennis wrote:
> [snip]
>
> Sometimes people ask Walter for tasks to work on. After hearing
> the tasks he gives, the enthusiasm quickly dies down. For
> example, apparently no one wants to write an [80 bit floating
> point
> emulator](https://forum.dlang.org/post/stnisj$12gu$1@digitalmars.com).
>
To be fair, not many people could write an 80-bit floating-point
emulator. When people complain about something over and over, it's
usually not low-hanging fruit that someone could figure out how to
do in an hour. It comes back to the number of people in the
community who have both the skills to do these things and the
motivation to do them. That's a hard problem for a volunteer
project.
I know there was a big back-and-forth a few months ago about
using AI in the D code base. Rather than re-litigate that, I'm
curious whether you are aware of retrieval-augmented generation
(RAG) [1], or know if any of the other major D developers are.
I have heard of people building a RAG setup for a less well-known
probabilistic programming language that I follow (Stan).
It's basically a technique where you take an off-the-shelf model
and supply it with extra context retrieved from documents you
provide. The idea is that off-the-shelf AIs may hallucinate more
for D because it is a less well-known language, but feeding in
that relevant context should give better results.
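To make that concrete, here is a toy sketch in D of the retrieval
step. The chunk contents, scoring, and function names are all made
up for illustration; a real setup would use embeddings and a
proper vector store rather than keyword counting:

```d
import std.algorithm : canFind, count, map, min, sort;
import std.array : join, split;
import std.stdio : writeln;
import std.string : toLower;
import std.typecons : Tuple, tuple;

// Hypothetical knowledge base: short chunks cut out of the spec and docs.
immutable string[] chunks = [
    "The real type is the largest floating point size implemented; 80 bits on x86.",
    "Function attributes include pure, nothrow, @nogc, and @safe.",
    "std.math provides ldexp, frexp, and nextUp for floating point work.",
];

// Naive retrieval: score each chunk by how many query words it
// contains, then prepend the top-scoring chunks to the prompt that
// gets sent to the model.
string buildPrompt(string question, size_t topK = 2)
{
    auto words = question.toLower.split;
    Tuple!(size_t, string)[] scored;
    foreach (c; chunks)
        scored ~= tuple(words.count!(w => c.toLower.canFind(w)), c);
    scored.sort!((a, b) => a[0] > b[0]);
    auto context = scored[0 .. min(topK, $)].map!(t => t[1]).join("\n");
    return "Answer using only the D documentation below.\n\n"
        ~ context ~ "\n\nQuestion: " ~ question;
}

void main()
{
    // The resulting prompt would then go to whatever model you're using.
    writeln(buildPrompt("How wide is the real floating point type on x86?"));
}
```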
OpenAI has a pretty easy way to make Custom GPTs and add
knowledge to them. So for instance, we could load in the D
language specification and docs. We could load in the style guide.
Maybe load in Ali's book. It might take a little coding, but there
would probably be a way to load in the DMD code base itself. The
limitation here is that you can only load 20 files of up to 512 MB
each, and I don't think zip files work, so you would need to
flatten multiple files into one large text file (a rough sketch of
that step is below). The end goal would be a custom GPT that is
more specific to the D language.
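That flattening step is pretty mechanical. Something like this D
sketch would do it; the paths and output file name are just
placeholders:

```d
import std.file : dirEntries, readText, SpanMode;
import std.path : extension;
import std.stdio : File;

void main()
{
    // Collect every .d file under the source tree into one text file,
    // with a separator header so the model can tell the files apart.
    auto outFile = File("dmd-knowledge.txt", "w");
    foreach (entry; dirEntries("dmd/compiler/src", SpanMode.depth))
    {
        if (entry.isFile && entry.name.extension == ".d")
        {
            outFile.writeln("===== ", entry.name, " =====");
            outFile.write(readText(entry.name));
            outFile.writeln();
        }
    }
}
```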
So for instance, suppose we create a custom D GPT, feed in the
standard for 80-bit floating point (maybe also an example of
another kind of emulator), and then ask it to write an emulator.
See what happens. At least with a project like this we can write
unit tests comparing the behavior of the hardware against the
emulator (see the sketch below), which makes it a good test case.
And if the concern is shipping something that is purely
AI-generated, then a more experienced human could take it as a
starting point and clean it up into a final version. I think that
would be less work than starting from scratch.
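For instance, the test harness could look roughly like this.
emulatedAdd is a hypothetical name, and the stand-in body just
calls the hardware so the snippet compiles; the real emulator
would do the 80-bit arithmetic in software:

```d
// Hypothetical emulator entry point. The stand-in body just uses the
// hardware so this compiles; the actual emulator would implement the
// operation in software.
real emulatedAdd(real a, real b) { return a + b; }

unittest
{
    // On x86, real is the hardware 80-bit type, so each emulated
    // operation can be checked bit-for-bit against the hardware
    // result using `is`, which compares floating point values by
    // bit pattern.
    real a = 1.0L;
    real b = real.epsilon;
    assert(emulatedAdd(a, b) is a + b);
    assert(emulatedAdd(real.max, real.max) is real.max + real.max); // overflows to infinity
    assert(emulatedAdd(-0.0L, 0.0L) is -0.0L + 0.0L);               // signed zero handling
}
```

Run it with `dmd -unittest -main`, and every operation the
emulator supports gets the same treatment against the hardware
result.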
And there might be other problems where this approach could
provide a good starting point, like improving the debugger
support that no one seems to want to work on.
I'm not saying that AI is a panacea, but it keeps getting better.
I've had more success with ChatGPT 5 than with prior versions. It
still hallucinates and gets things wrong, but at a lower rate than
before. A custom GPT for D might reduce hallucinations and
incorrect responses further for D code. It could help kickstart
some of these more difficult projects or otherwise reduce the
burden on experienced D developers. It could also help people new
to D come to the language, like how run.dlang.org makes it easy to
get started with D (heck, I bet you could combine the two: talk to
the GPT in natural language to get it to write the code, then put
it into run.dlang.org).
[1] https://en.wikipedia.org/wiki/Retrieval-augmented_generation