looking for a D3 stl project
monkyyy
crazymonkyyy at gmail.com
Mon Jan 15 21:01:54 UTC 2024
On Monday, 15 January 2024 at 19:48:30 UTC, H. S. Teoh wrote:
>
> Adam's pretty good about merging stuff that makes sense.
> Just create a project on github and invite contributors.
I can't imagine *myself* writing the 100 useful algorithms, the
glue code, and the 20 or so data structures alone.
> My personal goal is to make my algorithms *completely* data
> structure agnostic.
I find that unlikely unless you stick to purely functional
algorithms; in-place sorting and finding indices/slices are, in
my mind, inherently imperative.
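A quick Phobos illustration of what I mean (standard names, nothing from my lib): `sort` mutates its argument in place, and `countUntil` hands back an index into the actual data rather than a transformed view.

```d
import std.algorithm : sort, countUntil;

void main(){
    int[] a = [3, 1, 2];
    a.sort();                 // mutates the array in place
    auto i = a.countUntil(2); // yields an index into the actual data
    assert(a == [1, 2, 3] && i == 1);
}
```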
>> Filter will break length, map will break a ref front, if you
>> declare length is higher on the hierarchy than ref front, or
>> vice versa you're necessarily limiting your flexibility and
>> users will find hacks like adding `.array` to move back up
>> the hierarchy
>
> I'm confused. You just said "feature-based and not
> hierarchy-based" and then you start talking about moving back
> up the hierarchy. Which do you mean?
I'm arguing why hierarchy-based designs suck and how users (me)
respond: if I type out 5 chained functions and the compiler says
"you don't have a bidirectional range", I will throw `.array` in
at each spot before thinking about why I'm lacking anything.
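A minimal sketch of that reflex with Phobos names (`filter` only hands back a forward range, so `retro` rejects the chain until an allocating `.array` is wedged in):

```d
import std.algorithm : filter;
import std.range : retro;
import std.array : array;

void main(){
    auto data = [1, 2, 3, 4, 5];
    // data.filter!(x => x % 2).retro;  // error: filter's result is not bidirectional
    auto r = data.filter!(x => x % 2).array.retro; // .array allocates to "fix" it
    assert(r.front == 5);
}
```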
>
>> 2. one of those feature sets has indexing, so searching isn't
>> so badly designed and named
>
> Don't understand what exactly you mean here. What does has
> indexing have to do with "searching isn't so badly designed"?
>
> Also, a name is just an identifier; as long as it's not
> ridiculous I don't really care how things are named.
>
The theory of "finding", when you're imagining ranges as "views
of data", is that you `countUntil` the right element or you
return a range in a "state" that's useful in some way.
I view data as having a place where it actually exists, and would
like filter/chunks/slide to fundamentally leave `.index` alone.
Copied and pasted from my libs:
```d
// true if R behaves like one of my ranges: empty, front, and pop compile
enum isIter(R) = is(typeof(
    (R r){
        if(r.empty) return r.front;
        r.pop;
        return r.front;
    }));
// true if R also exposes a stable .index into the underlying data
enum hasIndex(R) = is(typeof((R r) => r.index));

auto filter(alias F, R)(R r){
    static assert(isIter!R);
    struct filter_{
        R r;
        auto front() => r.front;
        void pop(){
            r.pop;
            // skip elements that fail the predicate
            while((!r.empty) && (!F(r.front))){
                r.pop;
            }
        }
        bool empty() => r.empty;
        // filtering leaves .index alone: it still points into the source
        static if(hasIndex!R){
            auto index() => r.index;
        }
    }
    auto temp = filter_(r);
    if(temp.empty){ return temp; }
    // prime the range so front satisfies F from the start
    if(!F(temp.front)){ temp.pop; }
    return temp;
}

auto swapkeyvalue(R)(R r){
    // exchange the roles of front and index
    struct swap{
        R r;
        auto front() => r.index;
        auto index() => r.front;
        auto pop() => r.pop;
        auto empty() => r.empty;
    }
    return swap(r);
}
```
`filter.swapkeyvalue` should let you take any filter and get a
lazy list of indices under such a scheme.
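For example, with a hypothetical index-carrying range over an array (`IndexedArray` is made up for illustration and assumes the definitions above):

```d
// hypothetical range that walks an array while exposing .index
struct IndexedArray(T){
    T[] data;
    size_t i;
    bool empty() => i >= data.length;
    auto front() => data[i];
    auto index() => i;
    void pop(){ i++; }
}

void main(){
    import std.stdio : writeln;
    auto matches = IndexedArray!int([3, 1, 4, 1, 5])
        .filter!(x => x == 1)
        .swapkeyvalue;
    for(; !matches.empty; matches.pop)
        writeln(matches.front); // prints 1, then 3: the indices of the 1s
}
```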
>> 5. "composite algorithms" where you reuse smaller pieces are
>> encouraged and not blocked for "you made an extra
>> allocation" or "too trivial":
>> `auto sumWhere(alias F,R)(R r)=>r.filter!F.reduce((a,b)=>a+b)`
>
> Why should there be an exponential number of functions when you
> could just provide the log(n) number of primitives which the
> user could compose himself to build an exponential variety of
> algorithms?
Map, filter, and reduce have to be template hell, but template
hell should be minimized.
I'm not so convinced about all algorithms, and I'd like to see a
collection of small algorithms, written from base algorithms, that
end users could try first; if one fails, copy, paste, and edit it.
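Something like this Phobos-based sketch of the `sumWhere` above (the names and swapping `reduce` for `sum` are my choices, not a fixed API):

```d
// a small composite algorithm a user could copy, paste, and edit
import std.algorithm : filter, sum;

auto sumWhere(alias F, R)(R r) => r.filter!F.sum;

unittest{
    assert([1, 2, 3, 4].sumWhere!(x => x % 2 == 0) == 6);
}
```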