Kotlin Meta and CT programming vs D
Dom DiSc
dominikus at scherkl.de
Sat Jan 4 16:12:12 UTC 2025
On Friday, 3 January 2025 at 04:28:36 UTC, Jo Blow wrote:
> On Wednesday, 25 December 2024 at 11:24:00 UTC, Dom DiSc wrote:
>> On Wednesday, 25 December 2024 at 01:45:18 UTC, Jo Blow wrote:
>>> I think the real problem is that we've been building on
>>> systems that were built on very primitive ideas and so it's a
>>> constantly piling on of ad-hoc changes that may or may not
>>> evolve into something more and then a constant need to
>>> maintain them.
>>
>> Yes. That's how evolution works.
>
> There is nothing in evolution that says it has to proceed at a
> high error rate. Many creatures have barely evolved over
> millions of years.
Nobody says that. On the contrary: if something fits its niche
perfectly, it does not need to evolve any further.
> The faster one proceeds the more errors will be made. That is
> just a fact.
>
> Evolution is differential geometry at work. All dynamical
> systems and processes(such as AI training, simulations, etc)
> evolve and their error is controlled by a "step size" and the
> larger the step size the worse the results.
No. It is controlled by survival. If the new thing works, it
will live; if there are errors, it will die - no matter the step
size.
> Humans seem to have a desire to get things done as fast as
> possible and that produces far more errors than necessary.
Re-using something battle-tested makes progress fast, and at the
same time you can be sure there is no error in the re-used part.
If you invent everything again from scratch, your lessons learned
may help you avoid making the same old errors again, but there is
a (large) probability of introducing new ones. So you have to
test not only the new parts but also the parts you re-invented.
That takes huge amounts of time and money, so it is seldom done.
> So I don't agree with your implication evolution necessarily is
> highly error prone. In fact, if that was the case biology would
> just be nothing but chaos by now.
Who said evolution is error-prone?
If something survives, that proves there are no (fatal) errors.
If something dies, it may be bad luck, but it is more likely it
had some error (meaning: it doesn't fit the new circumstances).
Of course, if you add a big new function, it is more likely to
fail than if you add only a small function, wait until it
survives, and then add the next small function on top of it.
You may get lucky and a big new function survives, but that is
far less likely.
If the circumstances change dramatically, something can only
survive with big new changes. This is why many species die out
nowadays: it is too unlikely that the big new changes will work
on the first try, and the small-changes strategy is too slow.
> You clearly understand that the more one works to do it right
> the more likely they will get it right and that it will take
> longer?
Yup. Try it often and you will likely find the correct solution
eventually.
> Capitalism accelerates the process and this leads to far more
> errors than it would otherwise.
?!? My experience with capitalism is more like: "Do it fast, no
matter if it is full of errors, as long as it seems to work for
the person who pays for it."
>> It's not only the cost to start new from scratch, it's the
>> problem that everything has to become battle-tested again,
>> which takes ages.
>> This is why things like printf are still in use.
>
>
> No, it doesn't have to become battle-tested again. You are
> assuming there is no memory in the system. Humans learn from
> their mistakes.
Having learned to avoid a specific type of error doesn't prevent
you from making new ones.
So every part you touch, you have to test again. It is always
faster to use something that is already well tested than to
invent the same thing again - even if the new version is smaller,
faster, and less complex and you apply all the lessons learned to
it. You have to test it again because you may have made new
errors (and of course the old errors may also occur again, as
lessons learned are never perfect).
> printf is in use because it works and there is no real need to
> do anything else that would do the same thing.
No. It is used because it works. There IS a need for something
else that would do the same thing, but the risk that the new
thing has new errors outweighs that need.
The need must be very big for it to be worth the effort plus the
risk of inventing something new. That's the point. Re-use is
cheaper and safer.
This is why everything is "piled up" on old things.
Evolution has been "piling up things" for billions of years, and
no better idea has come up. This is why I call it "unavoidable".
Maybe there is a better strategy, but it was not found in
billions of years. And I'm pretty sure building everything up new
from scratch is NOT that better strategy.
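A minimal sketch in D of exactly this kind of re-use (assuming a
standard dmd/druntime setup; this is just an illustration, not
from the original exchange): the language simply binds to C's
battle-tested printf through core.stdc.stdio instead of
re-implementing formatted output from scratch.

    // Minimal sketch: D re-uses C's battle-tested printf directly,
    // rather than re-implementing formatted output from scratch.
    import core.stdc.stdio : printf;

    void main()
    {
        // Same format string, same semantics as in C.
        printf("2 + 2 = %d\n", 2 + 2);
    }

std.stdio.writefln exists alongside it, but the old, tested part
was kept rather than thrown away.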
>
> When you first wrote a hello world program and made mistakes
> are you saying that if you now go and write one you will make
> just as many mistakes?
No, but still some, and maybe new ones.
Of course, on something very small and simple you have a chance
of avoiding all mistakes. But the chance of introducing bugs is
never zero, and it goes up very fast as a system gets bigger and
more complex.
> 30 years ago most entry level programmers were terrible.
> Because of evolution current entry level programmers are
> typically the equivalent of a seasoned programmer 30 years ago.
> Many kids nowadays are programming when most kids back then
> didn't even know what a computer was.
Yeah. And that is because they start on that pile of old cruft
that was built over the last 30 years - instead of inventing it
all anew from scratch.
> With your logic one could say "Why invent computers, we have
> the abacus and it works great for our purposes".
No, my logic is: "Why invent computers if they were already
invented? Let's build the next, better computer, re-using most of
the parts of the old one."
Otherwise we would now have much better steam engines (because
they were built anew from scratch in 2010), but computers would
still be far in the future.