Challenge: write a reference counted slice that works as much as possible like a built-in slice

Atila Neves atila.neves at gmail.com
Thu Nov 11 09:15:54 UTC 2021


On Tuesday, 9 November 2021 at 19:39:15 UTC, Stanislav Blinov 
wrote:
> On Tuesday, 9 November 2021 at 18:33:01 UTC, Atila Neves wrote:
>> On Tuesday, 9 November 2021 at 17:26:32 UTC, Stanislav Blinov 
>> wrote:
>>> On Tuesday, 9 November 2021 at 17:15:59 UTC, Atila Neves 
>>> wrote:
>>>
>> Could you please explain why you'd rather do that instead of 
>> using the equivalent of C++'s std::{vector, unique_ptr, 
>> shared_ptr} and Rust's Vec, Box, and Rc/Arc? I cannot myself 
>> imagine why anyone would want to.
>
> Instead? Not instead. Together with. It's all well and fine to 
> rely on proven library solutions. But I'd rather a D or C++ 
> programmer, when faced with necessity,

I have yet to encounter a case where that was necessary, other 
than "I'm implementing the standard library".

> be able to write their allocations correctly, and not hide 
> under a rug because Herb'n'Scott tell 'em that's "bad practice".

I think that decades of experience (and tools like valgrind and 
asan) have shown that programmers aren't able to write their 
allocations correctly.
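
A contrived D example of the class of bug I mean, exactly the 
kind of thing ASan and valgrind exist to catch (an illustration, 
not code from the thread):

    import core.stdc.stdlib : malloc, free;

    void oops()
    {
        auto p = cast(int*) malloc(4 * int.sizeof);
        free(p);
        p[0] = 1; // use after free: compiles fine, corrupts the heap
    }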

> We already have at least two (three?) generations of 
> programmers who literally have no clue where memory comes from. 
> If we keep this up, in a couple decades "nobody" (your 
> definition of nobody) would be able to write you a better 
> malloc for your next generation of platforms and hardware.

I don't think this is a problem.

> Take a peek in the learn section. Person asks how to translate 
> a simple C program into D. Buncha answers that *all* amount to 
> "allocate craptons of memory for no reason". At least one from 
> a very well known D educator. Only no one even mentions any 
> allocations at all. Why even talk about it, right?

I wouldn't care about it either.

>> My advice is that unless you have a very good reason not to, 
>> just use the GC and call it a day.
>
> Who, using a SYSTEMS language, should not have a reason to care 
> about their memory?

Me, ~99.9% of the time.
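
Concretely, "just use the GC" in D looks like this (a trivial 
sketch, my wording):

    void obvious()
    {
        auto xs = new int[](1_000); // GC allocation
        xs ~= 42;                   // GC handles reallocation on append
        // no free(), no reference counts: unreachable memory is collected
    }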

My algorithm:

1. Write code
2. Did I notice it being slow or too big? If so, go to 4.
3. Move on to the next task
4. Profile and optimise, go to 2.

I definitely don't miss having to make things fit into 48k of 
RAM. I once wrote code for a microcontroller with 1k of 
bit-addressable RAM. Good times. Sorta.

