Mir vs. Numpy: Reworked!

Igor Shirkalin isemsoft at gmail.com
Thu Dec 10 11:50:28 UTC 2020


On Monday, 7 December 2020 at 13:07:23 UTC, 9il wrote:
> On Monday, 7 December 2020 at 12:28:39 UTC, data pulverizer 
> wrote:
>> On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote:
>>> I don't know. Tensors aren't so complex. The complex part is 
>>> a design that allows Mir to construct and iterate various 
>>> kinds of lazy tensors of any complexity and have quite a 
>>> universal API, and all of this is boosted by the fact that 
>>> the user-provided kernel (lambda) function is optimized by 
>>> the compiler without overhead.
>>
>> I agree that a basic tensor is not hard to implement, but the 
>> specific design to choose is not always obvious. Your 
>> benchmarks show that design choices have a large impact on 
>> performance, and performance is certainly a very important 
>> consideration in tensor design.
>>
>> For example, I had no idea that your ndslice variant was 
>> using more than one array internally to achieve its 
>> performance - it wasn't obvious to me.
>
> The ndslice tensor type uses exactly one iterator. However, 
> the iterator is generic, and lazy iterators may contain any 
> number of other iterators and pointers.
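
If I read that right, a minimal sketch of such composition, 
assuming only the public mir.ndslice API (iota, map, slice) and 
nothing Mir-internal, would be something like:

/+ Lazy composition in mir.ndslice (sketch). +/
import mir.ndslice.topology : iota, map;
import mir.ndslice.allocation : slice;

void main()
{
    // A lazy 2D index tensor; nothing is allocated yet.
    auto a = iota(3, 4);

    // map builds another lazy tensor; the lambda is a template
    // argument, so the compiler can inline it without call overhead.
    auto b = a.map!(x => x * 2);

    // b is still a single Slice whose generic iterator nests the
    // iota iterator inside the map iterator. Evaluation happens
    // only when the data is materialized:
    auto c = b.slice; // allocates and fills a 3x4 matrix
}

Everything before the final .slice call stays lazy, and the 
composed Slice still carries exactly one, though nested, 
iterator type.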

How does Mir's iterator differ from the usual iterator (range) 
concept in D? And how does it compare, in terms of execution 
speed, with designing your own tensors and the operations that 
need to be performed on them, if we already know how to achieve 
that?



