Continuation of `Having "blessed" 3rd party libraries may make D more popular` DIP thread

WraithGlade wraithglade at protonmail.com
Tue Jul 8 21:30:24 UTC 2025


On Tuesday, 8 July 2025 at 06:41:21 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
> [...]
>
> What we do have is contributions from people, and they may use 
> LLM's.
> This we have a stance on. I brought this to the monthly meeting 
> a couple of months ago. Our decision was that LLM's are a tool, 
> but what we want in contributions is from Humans. Humans must 
> make the decisions, tools are just what we use to get there.
>
> The reason this came up is due to a couple of PR's that were 
> full of LLM generated content, and the comments within the PR 
> itself included LLM content that was not understood. The people 
> involved did not understand what they were attempting to 
> contribute.
>
> As a result of this, I did this PR: 
> https://github.com/dlang/dmd/pull/21364

Thank you for the information and the link.

Such a stance is what I would deem "tolerable but not admirable". 
Only the difficulty of enforcement across the scale and diversity 
of the whole community makes it rise to the level of tolerable.

----

**My thoughts in greater depth, for those who wish to consider 
it:**

Calling plagiarism-based LLM AI just a "tool" is akin to saying 
that slavery is just a tool for picking cotton more efficiently. 
These AIs effectively steal other people's ideas and even their 
personal style, the digital manifestation of who they are as a 
person, plus all the time and labor involved in creating their 
original work and refining themselves as a person, and then 
violate all of that repeatedly, pervasively, and unaccountably 
for profit.

Stealing from many people in aggregate and then using 
transformations to hide the nature of that (a.k.a. "training the 
AI") is even worse than stealing directly the old fashioned way. 
It violates the boundaries and rights of even more people. The 
advent of these AI is akin to a kind of attempted *de facto* 
digital slavery that people in positions of wealth and power are 
very likely trying to force upon the rest of humanity unwillingly 
and non-consensually to the great harm of the overall good.

This is a matter of basic human rights, not "politics". It is 
very much akin to such basic rights as not having one's physical 
body violated against one's will, or to bare-minimum property and 
privacy rights. I would argue that violating someone's personhood 
via the digital milieu is perhaps even worse than violating their 
body: to violate the mind is an even more profound and personal 
violation. It certainly feels that way to me, and there is a 
constant sense of being violated at every turn. Nothing feels 
even remotely safe in the digital world anymore.

Nothing about this is inevitable either. The common argument that 
"if we don't do it someone else will" is like saying "if we don't 
steal [some valuable item] then someone else will" and treating 
that as if it somehow justified the theft. It blatantly doesn't, 
and is in fact irrelevant. Another apt analogy is found in 
medicine: cancer cells are similarly selfish (growing without 
bound at the expense of others, just like these plagiarism-based 
AIs), yet the immune system repeatedly and successfully defeats 
incipient cancers throughout a person's life.

This implies that *not* engaging in such zero-sum tactics clearly 
works (the immune system literally suppresses such tactics very 
successfully), contrary to the popular but false "logic" that "if 
we don't do it they will" is some kind of well-founded basis for 
making these choices. It isn't; it is quite the opposite. These 
things will ultimately hurt everyone, both the creators and the 
users, just as cancer cells in the body do, and a similarly 
strict and swift response of opposition will be necessary to 
prevent long-term disaster.

Anyway, given how pervasively false justifications for these 
kinds of plagiarism tools have been embraced, there may be few 
options available once all factors are weighed. One can 
individually choose to strictly avoid such unethical "technology" 
(as I obviously would), but I will say that the commonality of 
this kind of thoughtless disregard for other human beings in our 
industry has darkened my view of it more than anything else in my 
entire life. The general public is *right* to view the whole tech 
industry with increasing hostility if such trends continue. It 
shows a lack of empathy and of logical foresight that is nothing 
short of staggering.

Those are my personal thoughts, in any case, and I thank anyone 
reading this for considering them and sharing their time with me. 
I wish you all the best and hope that you choose the better path, 
no matter how much rockier or more inconvenient it may sometimes 
be. I believe people can be better than this. The line dividing 
good and evil cuts through the heart of every human being, to 
paraphrase Solzhenitsyn, and it is true. We should always 
question the norms of our time and choose moral paths based on 
principles, not merely on what is trendy or rhetorically 
convenient.


More information about the Digitalmars-d mailing list