Time for 2.067

Andrei Alexandrescu via Digitalmars-d digitalmars-d at puremagic.com
Wed Feb 4 19:00:53 PST 2015


On 2/2/15 2:42 PM, Ulrich Küttler <kuettler at gmail.com> wrote:
> On Friday, 30 January 2015 at 23:17:09 UTC, Andrei Alexandrescu wrote:
>> Sorry, I thought that was in the bag. Keep current semantics, call it
>> chunkBy. Add the key to each group when the predicate is unary. Make
>> sure aggregate() works nicely with chunkBy().
>
> I might be missing some information here, so please forgive my naive
> question. Your requirements seem contradictory to me.
>
> 1. aggregate expects a range of ranges

Probably we need to change that because aggregate should integrate 
seamlessly with chunkBy.
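
To make that concrete, here is a rough sketch of the composition I 
have in mind. Caveat: aggregate() as discussed is the proposed 
function, not an existing Phobos primitive, so I'm using map + sum as 
a stand-in, and I'm assuming the new semantics where chunkBy with a 
unary predicate yields tuple(key, subrange) elements:

    import std.algorithm : chunkBy, map, sum;
    import std.stdio : writeln;
    import std.typecons : tuple;

    void main()
    {
        auto data = [1, 1, 2, 2, 2, 3];

        // Assumed shape: each element of `groups` is tuple(key, subrange).
        auto groups = data.chunkBy!(a => a);

        // What "aggregate integrates seamlessly with chunkBy" could
        // mean in practice: fold each subrange while keeping its key.
        auto sums = groups.map!(g => tuple(g[0], g[1].sum));

        foreach (g; sums)
            writeln(g[0], ": ", g[1]); // 1: 2, 2: 6, 3: 3
    }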

> 2. you ask chunkBy to return something that is not a range of ranges

Yah.

> 3. you ask chunkBy to play along nicely with aggregate

Yah.

> There are certainly ways to make this work. Adding a special version of
> aggregate comes to mind. However, I fail to see the rationale behind this.

The rationale, as discussed, is that the key value for each group is 
useful information. Returning a plain range of ranges would discard 
that information, forcing e.g. its recomputation.
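
To see the waste in a minimal sketch (using the binary-predicate 
form, which returns plain subranges): the caller has to re-apply the 
grouping expression to each group's front just to learn what the 
group was keyed on, even though chunkBy already evaluated it while 
splitting.

    import std.algorithm : chunkBy, map;
    import std.typecons : tuple;

    void main()
    {
        auto words = ["ab", "cd", "xyz", "pqr", "z"];

        // Range of ranges: the key is gone once the groups are built.
        auto groups = words.chunkBy!((a, b) => a.length == b.length);

        // Recovering it means recomputing it once per group.
        auto keyed = groups.map!(g => tuple(g.front.length, g));
    }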

> To me the beauty of ranges is the composability of "simple" constructs
> to create complex behavior. The current chunkBy does not need to be
> changed to "add the key to each group when the predicate is unary":
>
>   r.map!(pred, "a")
>    .chunkBy!("a[0]")
>    .map!(inner => tuple(inner.front[0], inner.map!"a[1]"));
>
> So I'd like to know why the above is inferior to a rework of
> chunkBy's implementation. Maybe this is a question for D.learn.

Wouldn't that force recomputation if a more complex expression replaced 
a[0]?
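
In other words (a hedged sketch; expensiveKey is a made-up stand-in 
for real work):

    import std.algorithm : chunkBy, map;
    import std.typecons : tuple;

    int expensiveKey(int x) { return x / 10; } // stand-in for real work

    void main()
    {
        auto r = [11, 12, 25, 26, 31];

        // Grouping on the expression directly: it may be evaluated
        // for both sides of every adjacent comparison.
        auto naive = r.chunkBy!((a, b) => expensiveKey(a) == expensiveKey(b));

        // The map trick avoids that, but only because the key is
        // materialized next to each element up front, which is
        // exactly the information a tuple-returning chunkBy would
        // hand out for free.
        auto cached = r.map!(a => tuple(expensiveKey(a), a))
                       .chunkBy!((a, b) => a[0] == b[0]);
    }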


Andrei


