bigint <-> python long

Ellery Newcomer ellery-newcomer at utulsa.edu
Mon Sep 10 10:24:11 PDT 2012


On 09/05/2012 07:10 PM, bearophile wrote:
>
> NumPy arrays <==> D arrays
>

I've been thinking about this one a bit more, and I am not sure it 
belongs in pyd.

First, the conversion is not symmetric. One can convert a numpy.ndarray
to a D array like so:

PyObject* ndarray; // a numpy.ndarray obtained from Python
double[][] matrix = d_type!(double[][])(ndarray);

However, going the other way,

PyObject* res = _py(matrix);

it is not at all clear that the user wants res to be a numpy.ndarray.
Part of the problem is that D arrays are overloaded to convert from a few
too many Python types (list, str, array, any iterable, any buffer). That
last one is a doozy. d_type never actually inspects ndarray's type, so _py
can hardly know which Python type to use when converting matrix back.
(What if ndarray is actually a foo.BizBar matrix?)
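
To make that concrete, a sketch (ndlist and ndarray stand for objects
obtained from Python beforehand; d_type and _py are the only pyd
functions involved):

PyObject* ndlist;  // a Python list of lists, obtained elsewhere
PyObject* ndarray; // a numpy.ndarray with the same contents

double[][] fromList  = d_type!(double[][])(ndlist);
double[][] fromArray = d_type!(double[][])(ndarray);
assert(fromList == fromArray);   // indistinguishable on the D side
PyObject* back = _py(fromList);  // list? str? ndarray? _py can't tell.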

I could just specialize _py for numpy.ndarrays, defaulting to lists of
lists (which is what we do already), but I kinda want a dedicated D type
for numpy.ndarrays.
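
Something along these lines, say (NdArray and its members are only a
sketch, not anything pyd has today):

struct NdArray(T) {
    PyObject* impl;  // the wrapped numpy.ndarray, reference held
    size_t[] shape;  // cached from the ndarray's shape
}

Converting back is then unambiguous: _py on an NdArray!double can just
hand impl back, and the Python type is never lost in the round trip.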

Also, all these conversions imply data copying; is this reasonable for 
numpy arrays?
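
One way to dodge the copy would be to borrow the ndarray's memory through
CPython's buffer protocol. A sketch: PyObject_GetBuffer and
PyBuffer_Release are the real C API, but the bindings import and the glue
around them are assumptions on my part:

import python; // C API bindings; exact module name may differ

double sumNoCopy(PyObject* ndarray) {
    Py_buffer view;
    if (PyObject_GetBuffer(ndarray, &view, PyBUF_C_CONTIGUOUS) != 0)
        throw new Exception("object does not support the buffer protocol");
    scope(exit) PyBuffer_Release(&view);
    // operate directly on numpy's memory; nothing is copied
    auto data = (cast(double*) view.buf)[0 .. cast(size_t) view.len / double.sizeof];
    double total = 0;
    foreach (x; data)
        total += x;
    return total;
}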

It is easy enough to get a void* and shape information out of the 
ndarray, but building a decent matrix type out of them is not trivial. 
Is there a good matrix library for D that would be suitable for this?
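
Failing that, even a minimal strided view over the void* would go a fair
way. A sketch, assuming positive strides (Strided2D is hypothetical):

struct Strided2D(T) {
    T* data;                     // points into the ndarray's buffer
    size_t rows, cols;           // from the ndarray's shape
    size_t rowStride, colStride; // from its strides, in elements

    ref T opIndex(size_t i, size_t j) {
        return data[i * rowStride + j * colStride];
    }
}

// given ptr and shape pulled out of the ndarray (C-contiguous case):
// auto m = Strided2D!double(ptr, rows, cols, cols, 1);
// auto x = m[2, 3];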

Oh yeah, also: rectangular matrices. For static arrays, the conversion is
one memcpy. For dynamic arrays: lots of memcpys. I suppose I could abuse
slicing to cut that down.
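
For the record, here is the shape of those copies, and the slicing abuse
I have in mind (the function names are just illustrative):

import core.stdc.string : memcpy;

// static array: one contiguous block, a single memcpy
void fromStatic(const double* src, ref double[3][4] dst) {
    memcpy(dst.ptr, src, dst.sizeof);
}

// dynamic jagged array: each row is its own allocation, one memcpy per row
double[][] fromDynamic(const double* src, size_t rows, size_t cols) {
    auto dst = new double[][](rows, cols);
    foreach (i; 0 .. rows)
        memcpy(dst[i].ptr, src + i * cols, cols * double.sizeof);
    return dst;
}

// the slicing abuse: one allocation, one memcpy, and the rows become
// views into the flat block rather than separate copies
double[][] fromDynamicSliced(const double* src, size_t rows, size_t cols) {
    auto flat = new double[](rows * cols);
    memcpy(flat.ptr, src, flat.length * double.sizeof);
    auto dst = new double[][](rows);
    foreach (i; 0 .. rows)
        dst[i] = flat[i * cols .. (i + 1) * cols];
    return dst;
}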

