The functions that convert numpy arrays of variable-length elements to a buffer and back can be quite slow. Running the test program https://github.com/HDFGroup/hsds/blob/master/tests/perf/arrayperf/bytes_to_array.py with a million-element array gave this output:
```
$ python bytes_to_array.py
getByteArraySize - elapsed: 0.3334 for 1000000 elements, returned 7888327
arrayToBytes - elapsed: 3.1166 for 1000000 elements
bytesToArray - elapsed: 1.1793
```
Not surprising, since these functions iterate over each element in a Python loop.
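For reference, the element-by-element pattern looks roughly like this (a minimal, self-contained sketch of a length-prefixed vlen-string encode/decode; the function names are made up and this is not the actual arrayUtil code):

```python
import numpy as np

def encode_vlen_strings(arr):
    """Simplified stand-in for the arrayToBytes-style conversion:
    a 4-byte length prefix plus utf-8 bytes per element.
    The per-element Python loop is the bottleneck."""
    chunks = []
    for item in arr:  # one Python-level iteration per element
        data = item.encode("utf-8")
        chunks.append(len(data).to_bytes(4, "little"))
        chunks.append(data)
    return b"".join(chunks)

def decode_vlen_strings(buffer, count):
    """Simplified stand-in for the bytesToArray-style conversion
    for a 1-D vlen string array."""
    arr = np.empty((count,), dtype=object)
    offset = 0
    for i in range(count):  # again, element by element
        n = int.from_bytes(buffer[offset:offset + 4], "little")
        offset += 4
        arr[i] = buffer[offset:offset + n].decode("utf-8")
        offset += n
    return arr

if __name__ == "__main__":
    arr = np.array([f"str_{i}" for i in range(1_000_000)], dtype=object)
    buf = encode_vlen_strings(arr)
    out = decode_vlen_strings(buf, len(arr))
    assert out[123456] == arr[123456]
```

Both directions spend essentially all of their time in interpreted per-element work (attribute lookups, small byte objects, list appends), which is why the timings scale linearly with element count and stay in the seconds range for a million elements.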
I looked into using numba, but numba doesn't work with numpy arrays of object dtype. Maybe a Cython version of arrayUtil?
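A minimal sketch of the numba dead end (hypothetical helper name; assumes numba is installed): nopython mode can't type object-dtype arrays, so even a trivial jitted loop over a vlen array fails at call time.

```python
import numpy as np
from numba import njit

@njit
def total_length(arr):
    # A trivial per-element reduction over the array.
    n = 0
    for i in range(arr.shape[0]):
        n += len(arr[i])
    return n

arr = np.array(["a", "bb", "ccc"], dtype=object)
try:
    total_length(arr)
except Exception as e:
    # numba cannot determine a type for an object-dtype array,
    # so compilation of the call fails.
    print(f"numba rejected the object array: {type(e).__name__}")
```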