I'm sorry I don't have a minimal example. I'm trying to determine whether dense_longlong is ever needed in my use case (the libigl python bindings). After looking at numpyeigen/src/codegen_function.py (lines 23 to 25 at 96c6356), my understanding is that dense_longlong would correspond to int128, which for my case is way too big.
However, when I left just dense_int and dense_long (which I thought corresponded to int32 and int64), my Windows tests are failing:

```
b = igl.boundary_loop(f)
ValueError: Invalid scalar type (longlong, Row Major) for argument 'f'. Expected one of ['int32', 'int64'].
```
My understanding here is that f is read from a .npy file into python as an int64 numpy array, so I'm confused by this ValueError message (which I believe is coming from numpyeigen). Is the error message accurate?
I think there may be some confusion here due to how ambiguous C++ is about the sizes of int, long, long long, etc. But I wonder if there's a numpyeigen way to avoid this:
For example, in that boundary_loop python binding I have code that currently looks like:
```cpp
npe_arg(f, dense_int, dense_long)
```
but what I'd much rather write is:
```cpp
npe_arg(f, dense_int32, dense_int64)
```
Since I'm defining an input argument, it's convenient if these type names match the numpy dtype names, which are thankfully unambiguous.
Adding dense_longlong to the current code seems to get around this issue on Windows, but wouldn't that inadvertently build int128 bindings on Linux/macOS?
Is there currently a way to guarantee that I build exactly int32 and int64 bindings in numpyeigen?
If not, how hard would it be to support the proposed dense_int32 and dense_int64 type names above?
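In the meantime, the only workaround I see on my side is to normalize the dtype before calling into the binding. A sketch, assuming the face indices fit in 32 bits (faces.npy is just an illustrative path):

```python
import numpy as np
import igl

f = np.load("faces.npy")  # loads as int64 here

# Casting to int32 sidesteps the platform-dependent C type behind int64,
# at the cost of a copy of the array.
b = igl.boundary_loop(f.astype(np.int32))
```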
You can see that longlong is platform-specific. On MSVC, long and int are the same size (32-bit signed integers) and long long is a 64-bit signed integer. On other platforms, long is a 64-bit signed integer and long long is either 64 or 128 bits.
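You can see the mismatch directly from numpy; the array below is just an illustrative stand-in for your f:

```python
import numpy as np

# numpy names dtypes by width (int32/int64) but backs them with C types.
# MSVC's long is only 32 bits, so numpy's int64 is backed by C "long long"
# (dtype char 'q') on Windows, while on Linux/macOS it is backed by C "long"
# (dtype char 'l'). numpyeigen dispatches on the underlying C type, which
# would explain how an int64 array ends up reported as "longlong" in that
# ValueError.
f = np.zeros((4, 3), dtype=np.int64)
print(f.dtype)       # int64 on every platform
print(f.dtype.char)  # 'q' on Windows (MSVC), 'l' on Linux/macOS
```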
In general, I agree that int128 is way too big for most applications. One possible solution is to add types to NumpyEigen (dense_int32, dense_int64, dense_int128) that are properly ifdef'd and resolve to the right thing on each platform. The reason I didn't do this is that it's quite tedious to test (you need to try 32- and 64-bit targets and MSVC/Clang/GCC across multiple platforms).
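As a rough sketch of what that resolution could look like on the codegen (Python) side — the table and function here are hypothetical, not actual NumpyEigen code:

```python
import numpy as np

# Hypothetical table mapping numpy's dtype chars to C++ type names.
_NUMPY_CHAR_TO_CPP = {
    "i": "int",
    "l": "long",
    "q": "long long",
}

def resolve_fixed_width(name):
    """Resolve e.g. 'dense_int64' to the C++ type numpy uses for that
    width on the build platform."""
    width = name.replace("dense_int", "")    # '32', '64', ...
    np_char = np.dtype("int" + width).char   # platform-dependent char
    return _NUMPY_CHAR_TO_CPP[np_char]

print(resolve_fixed_width("dense_int64"))  # 'long' on Linux/macOS,
                                           # 'long long' under MSVC
```

The C++ side would need matching ifdef'd typedefs, but the point is the same: the fixed-width names resolve per platform while the user-facing spelling stays stable.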