D Lang and Vibe Coding
Derrick J
microbrainz at gmail.com
Fri Oct 31 14:37:57 UTC 2025
On Friday, 31 October 2025 at 08:12:47 UTC, GrimMaple wrote:
> On Friday, 31 October 2025 at 04:57:08 UTC, Derrick J wrote:
>> I see that you are listed on the OpenD GitHub repo. It looks
>> like you are nearly a year in forking the code and making a
>> new standard library. It doesn't look like there has been much
>> progress with it. It would seem to me that you are more upset
>> that things aren't going the way that you would like because
>> having to write it yourself is a lot more work than just
>> complaining. The person who pays the bills makes the rules.
>> Whatever he's doing seems to be working, we're here aren't we!?
>
> What I don't understand is why people are so reluctant to
> see the point, and instead stick their heads
> in the sand and go "no, you're wrong, we're fine".
>
> OpenD wasn't really made to bring in as many new features as
> possible;
> in fact, it had quite an opposite goal - to bring in
> _stability_ to the
> lang, because at the time every new DMD release would break or
> deprecate
> something that was in the wild. Mind you, at the time releases
> were much more
> common too; having a new release almost every month compared to
> just 2 (!)
> releases this year. (Btw, still not seeing anything wrong with
> that? :)
>
> Regarding your original post,
>
>> Using a few different AI coding tools
>
> I tried vibecoding D as well, but it yielded poor results. I
> use LLMs for
> work (C++, C#, python) almost every day, and there are no
> issues. I think
> it would be possible to circumvent the wrong syntax and
> hallucinations -
> at some point the LLM was trying to convince me that std.zip
> doesn't exist,
> and when I pointed out that it does - it just started making up
> stuff. But
> I'd much rather just use a lang that Works (tm), instead of
> having to endlessly
> fight a losing battle against the compiler and tooling around
> it.
>
> Honestly, the whole D experience is like this - trying to
> circumvent stuff,
> while in other langs things Just Work (tm). In retrospect, I
> think D would
> have benefited much more from a working auto-completion server
> than from
> ImportC or the dreaded borrow checker.
The unlock for me was realizing that coding agents really seem
to work well when they have documents they can follow, detailing
step by step what they need to accomplish, with spaces to check
off their work and very detailed directions on what success
looks like and how to debug and troubleshoot. I came across a
YouTube channel that showed how they develop full applications
with AI, and that was the process. I had developed a web app in
Firebase Studio, mainly because it was free when you used a
free-tier Gemini API key. I got it to a pretty sophisticated
state, but after digging into the cost structure, I really
didn't like the limited control and the on-demand pricing if I
went over the free allowance. So I took that entire chat history
and had Gemini create PRD and TODO documents detailing step by
step what to implement. I have had some ups and downs along the
way, but when the backend Vibe.d program compiled and the front
end was able to query it and retrieve and store data, I knew I
was onto something.
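To give a sense of scale, the kind of backend involved here can be tiny. Below is a minimal sketch of a Vibe.d JSON endpoint a front end could query; it assumes the vibe-d dub dependency, and the route and payload names are hypothetical, not from my actual app:

```d
// Minimal Vibe.d JSON endpoint sketch (requires the vibe-d dub package).
// Route and payload names are hypothetical placeholders.
import vibe.vibe;

void getItems(HTTPServerRequest req, HTTPServerResponse res)
{
    // Return a JSON body the front end can retrieve and render.
    res.writeJsonBody(["items": ["first", "second"]]);
}

void main()
{
    auto router = new URLRouter;
    router.get("/api/items", &getItems);

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["127.0.0.1"];
    listenHTTP(settings, router);
    runApplication();
}
```

With something this small compiling, a `curl http://127.0.0.1:8080/api/items` from the front end side is enough to prove the round trip works.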
I use VS Code with the D language extensions, which run a full
syntax checker. I have the agent use dfmt to make sure the
source files are formatted properly, and it writes unit tests to
verify expected functionality. There were times when the tests
failed and it had to correct the code to get them to pass.
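For anyone unfamiliar with D's built-in testing, this is what the agent-written checks look like in practice. The function and values below are illustrative, not from my project; the file runs with `dmd -unittest -main -run file.d`:

```d
// A tiny function with the kind of unittest block an agent can be
// asked to write and keep green. Names and values are illustrative.
string slugify(string title)
{
    import std.string : strip, toLower;
    import std.array : replace;
    return title.strip.toLower.replace(" ", "-");
}

unittest
{
    // These run automatically when compiled with -unittest.
    assert(slugify("Hello World") == "hello-world");
    assert(slugify("  D Lang  ") == "d-lang");
}
```

Because the tests live next to the code, "run the unit tests and fix failures" is a single, checkable instruction in the TODO document.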
trialed Windsurf, Zencode, Gemini Code Assist, and GitHub
Copilot. Each has certain strengths and weaknesses. Google's
Gemini CLI works really well but consumes tokens pretty quickly,
though the free key can get you pretty far. GitHub Copilot has a
pretty generous "premium" AI allotment with several models to
switch between: Claude works pretty well, GPT-4.1 is not great,
but the newest models work pretty well. After I ran out of
premium AI tokens, I found that GPT-5 mini works really well and
has unlimited tokens. I haven't used the Claude Code CLI, but
from the videos I have seen it is very feature-rich, including
the ability to call other AI models via CLI tools. The models
are only part of the mix; the extension or client that hosts the
MCP servers is an important element in how well the agent can
read and write files locally, retrieve web pages, or run search
queries when it needs help resolving an issue. I have found that
they can usually find what they need on the dlang.org website
and resolve their errors.
Like any tool, it's how you wield it that makes it effective or
not. I was kind of down on it too, as I just lacked the
creativity to specify what I wanted. That was until I realized I
could use a chatbot to create those files for me based on what I
described; it could then fill in the blanks to go from idea to
implementation. Now, I am just making a web app. I don't know
how well it would code something low-level like a compression or
cryptography algorithm. I can't imagine it would work very well,
even with explicit step-by-step directions, but maybe it would
surprise us with a multi-model approach that picks the best
model for each task within the same project. Just my 2 cents
anyway; I'm not trying to push anything, just sharing what I've
learned.
I've read in other posts how D can be interfaced with llama.c,
which is interesting and exciting: you can build an app with
local intelligence built in. That's something I want to play
around with at some point, maybe making my app able to offer
suggestions to the user. There are some very small models that
can easily run on a standard system with good performance and
decent enough intelligence.
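The mechanism behind that kind of interop is D's extern(C) linkage: declare the C library's functions in D and link against the compiled C object. The function names below are hypothetical placeholders, not the real llama.c API, just a sketch of the general pattern:

```d
// General pattern for calling a C inference library from D.
// These declarations are hypothetical, NOT the actual llama.c API;
// link the compiled C object file (e.g. dmd app.d model.o).
extern (C)
{
    void* model_load(const(char)* path);                    // hypothetical
    int model_generate(void* m, const(char)* prompt,
                       char* buf, int buflen);              // hypothetical
    void model_free(void* m);                               // hypothetical
}

void main()
{
    import std.string : toStringz;
    import std.stdio : writeln;

    auto m = model_load("model.bin".toStringz);
    char[512] buf;
    int n = model_generate(m, "Suggest a tag:".toStringz,
                           buf.ptr, cast(int) buf.length);
    writeln(buf[0 .. n]); // the generated suggestion text
    model_free(m);
}
```

D's ImportC feature can even compile the C source directly, so the binding declarations may not be needed at all, depending on the setup.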
More information about the Digitalmars-d
mailing list