Use of AI for PR - my POV
Vladimir Panteleev
thecybershadow.lists at gmail.com
Tue Feb 10 17:54:29 UTC 2026
On Tuesday, 10 February 2026 at 17:38:40 UTC, matheus wrote:
> Interesting, since I'm not using AI I'd like to know, in this
> case you have LLM locally and you point to D source folder and
> It learns from that database and do everything from there?
The main way I use LLMs is with Claude Code. Here's how it works:
1. You open the directory with your project in a terminal
2. You run `claude`
3. This opens a TUI that looks like a chat interface. You type
your question or request for what you want the bot to do.
4. The bot looks at your code. If it's too big to fit into its
context (a limited window of how much it can see at a time), it
will search just for relevant bits.
5. If the problem is big, it will first write a plan for how it
aims to accomplish its goal, for you to read and approve.
6. It does the thing. It can edit files and run commands, e.g. to
run your test suite (or at least check that the code
compiles). By default it will ask before every edit or command.
Many people run it in a sandbox and disable the prompts, so that
it can work by itself but still doesn't accidentally delete your
entire computer.
7. Sometimes the bot can automatically write down what it learned
in a memory file. It will read this file automatically the next
time you ask it to do something in that project. There isn't
really a lot of "learning" other than something like this.
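To make step 4 concrete, here is a rough Python sketch of the kind of "search just for relevant bits" an agent might do. This is an illustration of the idea only, not Claude Code's actual retrieval logic; the function name and ranking scheme are made up:

```python
import os
import re

def find_relevant_chunks(root, query, max_chars=2000):
    """Naive stand-in for how an agent might search a codebase:
    rank files by how many query terms they contain, then return
    top snippets until a context budget (max_chars) is filled."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scored = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary/unreadable files
            score = sum(text.lower().count(t) for t in terms)
            if score:
                scored.append((score, path, text))
    scored.sort(reverse=True)  # best-matching files first
    out, used = [], 0
    for _score, path, text in scored:
        snippet = text[: max_chars - used]
        out.append((path, snippet))
        used += len(snippet)
        if used >= max_chars:
            break
    return out
```

A real agent does something much smarter (and also greps on demand), but the shape is the same: shrink the project down to whatever fits in the context window.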
Before and aside from that, I also have a spare GPU which I use
to run an autocomplete model. It's nice when writing code by hand. For
that I use https://github.com/CyberShadow/company-llama +
llama.cpp.
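For the curious, the autocomplete side boils down to sending the text around the cursor to the llama.cpp server and splicing in what comes back; company-llama handles this for you. Here is a hedged Python sketch of building such a fill-in-the-middle request. The field names follow llama.cpp's `/infill` endpoint as I understand it, but the server API has changed over time, so check your llama.cpp version:

```python
def build_infill_request(prefix, suffix, n_predict=48):
    """Build a request body for llama.cpp's fill-in-the-middle
    ("infill") server endpoint, which completes code between the
    text before and after the cursor.  Field names are assumptions
    based on llama.cpp's /infill API; verify against your version."""
    return {
        "input_prefix": prefix,   # code before the cursor
        "input_suffix": suffix,   # code after the cursor
        "n_predict": n_predict,   # cap on generated tokens
        "temperature": 0.2,       # low temperature for stable completions
    }
```

The editor plugin POSTs this to the local server and inserts the returned text at point.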
> So I wonder if usually programming languages have restrictions
> to ensure bad code don't mess with anything, but on the other
> hand AI keep getting better and learns how to avoid bad code,
> what's the point of having all these languages? Or in fact,
> could AI write a better programming language by itself?
Agentic coding actually works better the stricter the language!
This is because then the compiler can check if the code is
correct immediately, and if it's not, the agent sees the error
right away and can fix it before stopping. So, I think we will
see more strictly typed languages or languages with built-in
theorem proving become more popular. These are often too
frustrating or time-consuming for humans to use for everyday
programming, but that doesn't matter when the code is being
written by AI.
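As a toy illustration of that feedback loop, here is a Python sketch. The built-in `compile()` stands in for a real compiler, and the "fixes" are canned rather than model-generated; a real agent would shell out to dmd/ldc and feed the error text back into the model:

```python
def check(source):
    """Return None if source "compiles", else the error message.
    Python's compile() is a stand-in for a real type checker."""
    try:
        compile(source, "<candidate>", "exec")
        return None
    except SyntaxError as e:
        return str(e)

def agent_loop(candidates):
    """Try candidate patches in order until one passes the compiler,
    mimicking how an agent iterates on compiler feedback."""
    error = "no candidates tried"
    for source in candidates:
        error = check(source)
        if error is None:
            return source, None
        # In a real agent, `error` goes back into the model's
        # context here, and the model proposes the next candidate.
    return None, error

good, err = agent_loop([
    "def f(x) return x + 1",    # first attempt: syntax error
    "def f(x): return x + 1",   # "fixed" after seeing the error
])
```

The stricter the checker, the more bugs get caught inside this loop instead of escaping to runtime, which is exactly why strict languages suit agents.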
I am seeing this too with testing and Nix. Writing integration
tests with Nix is usually a lot of work, but once they're written,
they're rock-solid proof that your thing works, and anyone can
verify it. You can even script entire VMs
that can run any software for integration tests, and these VM
tests run without any problems on any Linux machine including
GitHub Actions. So, I've since been adding Nix-based integration
tests to all my projects (including this forum, which now has
Nix/Playwright-based end-to-end tests).
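For a flavour of what such a VM test looks like, here is a minimal sketch in the NixOS test framework. The service and its contents are made up for illustration; see the NixOS manual for the real test-framework details, and note that the entry point (`pkgs.nixosTest` vs. `pkgs.testers.runNixOSTest`) varies between nixpkgs versions:

```nix
# Hypothetical NixOS VM test: boot a VM running a toy HTTP service
# and check it responds.  Runs the same way on any Linux machine.
{ pkgs, ... }:
pkgs.nixosTest {
  name = "hello-service";
  nodes.machine = { ... }: {
    environment.systemPackages = [ pkgs.curl ];
    systemd.services.hello = {
      wantedBy = [ "multi-user.target" ];
      script = "${pkgs.python3}/bin/python -m http.server 8000";
    };
  };
  # testScript is Python, driving the booted VM.
  testScript = ''
    machine.wait_for_unit("hello.service")
    machine.wait_for_open_port(8000)
    machine.succeed("curl -sf http://localhost:8000/ >/dev/null")
  '';
}
```

Everything, including the VM image, is pinned by Nix, which is what makes the result reproducible for anyone who runs it.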
More information about the Digitalmars-d
mailing list