How people are using LLMs with D
Richard Andrew Cattermole (Rikki)
richard at cattermole.co.nz
Wed Feb 11 01:10:18 UTC 2026
I'm going to split this off into its own thread, so anyone who
has tips or approaches can share them:
On 11/02/2026 6:38 AM, matheus wrote:
> On Tuesday, 10 February 2026 at 16:14:03 UTC, Vladimir Panteleev wrote:
>
>> On Monday, 9 February 2026 at 21:25:02 UTC, user1234 wrote:
>>
>>> One tendency I have noticed recently in the D world is one guy
>>> who is very good with AI: CyberShadow. Already 5 or 6 PRs; he
>>> masters the tools.
>>
>> I guess I could post a few thoughts about AI / LLMs here if
>> people are interested...
>
> Interesting. Since I'm not using AI, I'd like to know: in this
> case, do you have an LLM locally that you point at the D source
> folder, and it learns from that codebase and does everything from
> there?
>
> I think this would be a nice topic/video to make to attract
> people, since D has been short on content lately. And it would
> show how you're doing PRs currently; maybe even people like me,
> who would dig in to help, would try it.
To start this off, I'll include Vladimir's reply, and then post
my own as a reply.
Original:
https://forum.dlang.org/post/htgjeqlvwzqwosecdqmz@forum.dlang.org
On 11/02/2026 6:54 AM, Vladimir Panteleev wrote:
> The main way I use LLMs is with Claude Code. Here's how it
> works:
>
> 1. You open the directory with your project in a terminal
> 2. You run `claude`
> 3. This opens a TUI that looks like a chat interface. You type
> your question or request for what you want the bot to do.
> 4. The bot looks at your code. If it's too big to fit into its
> context (a limited window of how much it can see at a time), it
> will search just for relevant bits.
> 5. If the problem is big, it will first write a plan for how it
> aims to accomplish its goal, for you to read and approve.
> 6. It does the thing. It can edit files and run commands in
> order to run your test suite (or at least check that the code
> compiles). By default it will ask before every edit or command.
> Many people run it in a sandbox and disable the prompts, so
> that it can work by itself but still doesn't accidentally
> delete your entire computer.
> 7. Sometimes the bot can automatically write down what it
> learned in a memory file. It will read this file automatically
> the next time you ask it to do something in that project. There
> isn't really a lot of "learning" other than something like this.
>
> Aside from that, I also have a spare GPU that I use to run an
> autocomplete model; it's nice when writing code by hand.
> For that I use https://github.com/CyberShadow/company-llama +
> llama.cpp.
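A note on step 7 of the walkthrough above: in current Claude Code the memory file is a Markdown file named CLAUDE.md at the project root. The file name is real; the contents below are a purely hypothetical sketch of what one might look like for a D project:

```markdown
# CLAUDE.md — project memory (hypothetical example for a D project)

## Build & test
- Build with `dub build`; run the test suite with `dub test`.

## Conventions
- One module per file under `source/`; module names are lowercase.
- Prefer `@safe` code; explain any `@trusted` block in a comment.
```

The agent reads this file at the start of each session in that directory, so build commands and project conventions don't have to be restated in every prompt.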
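On the autocomplete setup: a minimal sketch of the llama.cpp side, assuming a local code model in GGUF format. The model filename and port here are placeholders for illustration, not CyberShadow's actual configuration; company-llama then queries this server from Emacs for completions.

```shell
# Serve a local code model with llama.cpp's bundled HTTP server.
# The model file and port are assumptions, not a known-good config.
llama-server -m ./codellama-7b.Q4_K_M.gguf --port 8080
```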
More information about the Digitalmars-d mailing list