powerline-d (I got an AI to port powerline-shell to D)
Vladimir Marchevsky
vladimmi at gmail.com
Thu Sep 26 13:57:03 UTC 2024
On Thursday, 26 September 2024 at 06:58:28 UTC, FeepingCreature
wrote:
> You can disprove this to yourself by just talking to it. Have a
> chat, have it explain what it was going for. Doesn't always
> work reliably, but that there's *no* understanding there is
> easily disproven.
I don't know whether images can be posted here, so here is a
text quote of ChatGPT 4o:
> - A human brain weighs about 1,400 grams. A hamster brain
> weighs about 1.4 grams. How many hamsters are needed to create
> one human brain?
> - To determine how many hamsters' brains would equal the weight
> of one human brain, you can divide the weight of a human brain
> by the weight of a hamster brain: [weight calculations here].
> So, it would take 1,000 hamsters' brains to match the weight of
> one human brain.
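(The omitted arithmetic is presumably just the mass ratio:
1400 g / 1.4 g = 1000, which is where the figure of 1,000 comes
from.)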
It's completely obvious that there is no understanding of
context here at all. The model just matches a simple
weight-calculation pattern and completely ignores the clear
context of the actual question, which asks something else
entirely. You can imagine the effect of this kind of off-topic
answer buried somewhere deep in complex generated code. I
wouldn't trust it to write any code that isn't triple-checked by
real programmers afterwards...