powerline-d (I got an AI to port powerline-shell to D)
FeepingCreature
feepingcreature at gmail.com
Thu Sep 26 06:58:28 UTC 2024
On Tuesday, 24 September 2024 at 07:23:26 UTC, Vladimir
Marchevsky wrote:
> On Monday, 23 September 2024 at 08:46:30 UTC, aberba wrote:
>>
>> You would be surprised how much original code and code
>> modifications LLMs can output. I wouldn't be too quick to
>> dismiss them as mere translation tools.
>>
>> For example, take a look at the intro video on the Zed
>> homepage to see what can be achieved with AI assisted coding
>> (https://zed.dev/)
>
> I've seen that. My point is: while AI sometimes **can** really
> look great doing something, people should always keep in mind
> it's just complex math intended to generate specific
> patterns. It's not intelligent,
If somebody implemented intelligence as an algorithm, what form
would you expect it to take *other* than "complex math generating
specific patterns"?
> it doesn't really understand any context, nor does it
> understand anything it outputs.
You can disprove this for yourself just by talking to it. Have a
chat and ask it to explain what it was going for. That doesn't
always work reliably, but the claim that there's *no*
understanding there is easily disproven.
> Image generation is a great example: there are a lot of nice
> images done by AI, but there are also tons of garbage produced,
> with wrong limbs, distorted faces, etc, etc.
It should be noted that the text models used by image generators
are, by current-year standards, absolutely tiny. Like, GPT-2
tier. It does not surprise me that they don't understand things,
nor does it say anything about the chat models, which can be a
hundred or more times bigger.
> General-use ChatGPT answering with lots of text that means
> barely anything, or swapping topics, is another great example.
> And while you can sometimes be fine with some small mistakes in
> an image, coding has no room for that.
As usual - make sure you're using GPT-4, not 3.5!
The question isn't "does it make mistakes?"; the question is "does
it make more mistakes than I do?" And in my experience, Sonnet
makes *fewer.* His code compiles a lot more reliably than mine
does!
> So, my personal opinion: AI can be great at generating some
> repetitive or well-defined code to do some typing instead of a
> human, but it still needs a good programmer to ensure all
> results are correct.
Well, that's the case anyways.