Google's take on memory safety

aberba karabutaworld at gmail.com
Thu Mar 7 08:12:36 UTC 2024


On Wednesday, 6 March 2024 at 21:34:45 UTC, H. S. Teoh wrote:
> On Wed, Mar 06, 2024 at 09:16:23PM +0000, Sergey via 
> Digitalmars-d wrote:
>> On Wednesday, 6 March 2024 at 19:13:26 UTC, H. S. Teoh wrote:
>> > languages are on their way out. It may take another 20 
>> > years, or it may take 50 years, but make no mistake, their 
>> > demise will
>> 
>> Some CEOs expecting in 5 years nobody will need programming 
>> because of
>> AI :)
>> And AI will be banned to use “unsafe” code :)
> [...]
>
> What people are calling "AI" these days is nothing but a 
> glorified interpolation algorithm, boosted by having access to 
> an internet's load of data it can interpolate from to give it a 
> superficial semblance of "intelligence".  The algorithm is 
> literally unable to produce correct code besides that which has 
> already been written (and published online) by someone else.  
> Ask it to write code that has an existing, correct 
> implementation, and you have a chance of getting correct, 
> working code. Ask it to write something that has never been 
> written before... I'd really look into taking up life insurance 
> before putting the resulting code in production.
>
>
> T

Well, that's pretty much what the idea of training an AI model 
is: you train a model on existing data so it can learn from it. 
AIs aren't able to come up with original ideas. However, it 
should be noted that AI does a "bit" better at concocting 
results — just not original ideas (the very same applies to 
humans most of the time, actually, but with self- and 
context-awareness).

Also, depending on the task, the results can be better. For 
example, natural language processing appears to have progressed 
further than other AI tasks/fields. For sure, there's too much 
hype, but that's mostly there to bring in VC investment.

My observation.
