Add support for nvCOMP batch API #249
Conversation
Pull requests from external contributors require approval from a …
/ok to test
Nice work @Alexey-Kamenev! I have some minor suggestions from my first review pass.
python/kvikio/nvcomp_codec.py (Outdated)
""" | ||
return self.encode_batch([buf])[0] | ||
|
||
def encode_batch(self, bufs): |
Suggested change:

    def encode_batch(self, bufs):
    def encode_batch(self, bufs: List[Any]) -> List[Any]:
Done. I did not add type hints since numcodecs Codec does not use them, so I decided to do the same (but I still prefer to use type hints).
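For illustration only, a type-annotated version along the lines of the suggestion above might look like the sketch below. It is a signature sketch, not the merged code (which follows numcodecs' untyped style); the class body shown here is a stand-in.

```python
from typing import Any, List

from numcodecs.abc import Codec


class NvCompBatchCodec(Codec):  # illustrative stub, not the PR's full implementation
    codec_id = "nvcomp_batch"

    def encode(self, buf: Any) -> Any:
        # Single-buffer encode simply delegates to the batch path.
        return self.encode_batch([buf])[0]

    def encode_batch(self, bufs: List[Any]) -> List[Any]:
        # Compress each input buffer; returns one compressed buffer per input.
        raise NotImplementedError
```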
> numcodecs Codec does not use them

Could you please raise an upstream issue?
Done.
    max_chunk_size,
    num_chunks,
    temp_buf,
    comp_chunks,
I don't think `comp_chunks` is used afterwards, I guess it should be part of the returned result?
That's correct - `comp_chunks` is used only as a container that stores pointers to the actual chunks. nvCOMP requires this container to be on the GPU as well, i.e. it is a pointer to pointers and has to be in GPU memory, just like the actual chunk pointers. Once `compress` returns, this container is not needed/used anymore.
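To make the "pointer to pointers" point concrete, here is a minimal sketch of that layout using CuPy. The variable names and sizes are illustrative, not the PR's actual code:

```python
import cupy as cp

num_chunks = 4
max_comp_chunk_size = 1 << 20  # assumed upper bound per compressed chunk

# One device buffer per chunk to receive the compressed output.
comp_chunk_bufs = [
    cp.empty(max_comp_chunk_size, dtype=cp.uint8) for _ in range(num_chunks)
]

# comp_chunks: a device array holding the *addresses* of those buffers
# (pointer-sized integers on 64-bit systems). nvCOMP's batched compress
# expects this pointer array itself to live in GPU memory; it is consumed
# during the call and can be dropped afterwards.
comp_chunks = cp.array([b.data.ptr for b in comp_chunk_bufs], dtype=cp.uint64)
```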
Ahh got it, could you add some comments describing the nature of `comp_chunks` and `comp_chunks_header` in more detail?
Done - also added similar comments to `decode_batch`.
* Addressed review feedback.
* Added sample Jupyter notebook.
It looks like there are 3 pipeline failures for this PR, but I don't think they are related to the PR itself, since the errors look like this:

/ok to test
Only have some minor comments. I think an important follow-up PR would be to support GPU memory output of `encode_batch` and `decode_batch`: #251
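As a rough sketch of that follow-up idea: the snippet below shows hypothetical usage where decompressed output stays in GPU memory. The constructor details and the `out=` argument are assumptions for illustration (the `out=` behaviour is what #251 asks for, not something this PR implements):

```python
import cupy as cp
import numpy as np

from kvikio.nvcomp_codec import NvCompBatchCodec  # module added by this PR

codec = NvCompBatchCodec("lz4")

# Compress on the GPU; today the results come back as host buffers.
comp_bufs = codec.encode_batch([np.arange(1024, dtype=np.uint32).tobytes()])

# Hypothetical future API: decode directly into caller-provided GPU buffers,
# avoiding a device-to-host copy of the decompressed data.
out_bufs = [cp.empty(1024, dtype=cp.uint32)]
codec.decode_batch(comp_bufs, out=out_bufs)
```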
Looks good, thanks @Alexey-Kamenev!
/ok to test
/merge
See #248 for more details.