std.allocator needs your help

Manu turkeyman at gmail.com
Mon Sep 23 23:06:56 PDT 2013


On 24 September 2013 15:31, Andrei Alexandrescu <
SeeWebsiteForEmail at erdani.org> wrote:

> On 9/23/13 9:56 PM, Manu wrote:
>
>> You can't go wasting GPU memory by overallocating every block.
>>
>
> Only the larger chunk may need to be overallocated if all allocations are
> then rounded up.


I don't follow.
If I want to allocate 4k aligned, then 8k will be allocated (because it
wants to store an offset).
Any smaller allocation, say 16 bytes, will round up to 4k. You can't
waste precious GPU RAM like that.
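
To put numbers on it, here's roughly the arithmetic behind that (the helper is
purely illustrative, not a proposed API; it assumes alignment is satisfied only
by over-allocating and stashing an offset):

  // Worst-case size a conservative allocator must request to guarantee
  // `alignment` using nothing but padding plus a stored offset.
  size_t overAllocatedSize(size_t size, size_t alignment)
  {
      // Up to alignment - 1 bytes of padding, plus room for the offset.
      return size + alignment - 1 + size_t.sizeof;
  }

  void main()
  {
      import std.stdio : writeln;
      writeln(overAllocatedSize(4096, 4096)); // ~8k requested for a 4k-aligned 4k block
      writeln(overAllocatedSize(16, 4096));   // ~4k+ requested for a 16 byte block
  }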

A minimum and a maximum (guaranteed without over-allocating) alignment may
be useful.
But I think allocators need to be given the opportunity to do the best they
can.

>> It's definitely important that allocators are able to receive an
>> alignment request, and give them the opportunity to fulfill a dynamic
>> alignment request without always resorting to an over-allocation strategy.
>>
>
> I'd need a bit of convincing. I'm not sure everybody needs to pay for a
> few, and it is quite possible that malloc_align suffers from the same
> fragmentation issues as the next guy. Also, there's always the possibility
> of leaving some bits to lower-level functions.


What are they paying exactly? An extra arg to allocate that can probably be
defaulted?
  void[] allocate(size_t bytes, size_t alignment = this.alignment) shared;
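
For illustration, the extra argument only costs anything when a caller actually
asks for a stricter alignment (gpuAlloc and its natural alignment are made up
for the example):

  void[] verts = gpuAlloc.allocate(256);        // defaulted: gpuAlloc.alignment
  void[] page  = gpuAlloc.allocate(4096, 4096); // explicit 4k alignment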

Or is it the burden of adding the conservative over-allocation boilerplate to
each simple allocator that doesn't want to deal with alignment itself?
I imagine that could be automated; the boilerplate could be provided as a
library:

void[] allocate(size_t size, size_t alignment)
{
  size_t allocSize = std.allocator.getSizeCompensatingForAlignment(size, alignment);

  void[] mem = ...; // allocation logic using allocSize

  return std.allocator.alignAllocation(mem, alignment); // adjusts the range, and
                                                        // may write the offset to the preceding bytes
}
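
For completeness, a rough sketch of what those two helpers might look like; the
names and the offset-before-the-block scheme come from the pseudo-code above,
everything else (power-of-two alignment, a size_t offset slot) is assumed:

  // Worst-case size to request so the aligned block plus stored offset fits.
  size_t getSizeCompensatingForAlignment(size_t size, size_t alignment)
  {
      return size + alignment - 1 + size_t.sizeof;
  }

  // Align the start of `mem`, recording how far it moved in the size_t
  // immediately before the returned block so a matching deallocate could
  // recover the original pointer.
  void[] alignAllocation(void[] mem, size_t alignment)
  {
      auto base    = cast(size_t) mem.ptr + size_t.sizeof;
      auto aligned = (base + alignment - 1) & ~(alignment - 1);

      *(cast(size_t*) aligned - 1) = aligned - cast(size_t) mem.ptr;

      auto available = mem.length - (aligned - cast(size_t) mem.ptr);
      return (cast(void*) aligned)[0 .. available];
  }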