Can we use the Java Cap'n Proto implementation to efficiently write millions of records into a large binary file and then read those records back?
To simplify, let's assume that only one struct is defined and that all the records we write are of that struct. Writes and then reads are sequential (streaming).
For efficiency, what we'd expect is the ability to reuse message builders and readers, or at least the byte buffers they use internally.
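For reference, here is a minimal sketch of the sequential write-then-read pattern described above, using capnproto-java's framed serialization (`Serialize.write` / `Serialize.read`, one message per record). The schema-generated class `Record` and its `name` field are hypothetical placeholders; only the `org.capnproto` calls are from the actual library. Note this sketch allocates a fresh `MessageBuilder` per record rather than reusing one, which is exactly the overhead the question asks about.

```java
// Sketch: stream millions of records to a file and back with capnproto-java.
// ASSUMPTION: a schema compiled to a generated class `Record` with a Text
// field `name`; `Record` and `Record.factory` are hypothetical names here.
import org.capnproto.MessageBuilder;
import org.capnproto.MessageReader;
import org.capnproto.Serialize;

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RecordStream {
    static final int COUNT = 1_000_000;

    public static void main(String[] args) throws IOException {
        Path file = Path.of("records.bin");

        // Write: one framed message per record. A new MessageBuilder is
        // allocated each iteration; reusing the underlying buffers would
        // require support from the library's allocator.
        try (FileChannel out = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 0; i < COUNT; i++) {
                MessageBuilder message = new MessageBuilder();
                Record.Builder record = message.initRoot(Record.factory);
                record.setName("record-" + i);
                Serialize.write(out, message);
            }
        }

        // Read: Serialize.read consumes one framed message at a time from
        // the channel, so the file streams through sequentially instead of
        // being loaded into memory all at once.
        try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
            while (in.position() < in.size()) {
                MessageReader message = Serialize.read(in);
                Record.Reader record = message.getRoot(Record.factory);
                // process record.getName() ...
            }
        }
    }
}
```

Framing each record as its own message keeps per-record memory bounded on both sides, at the cost of a small segment-table header per record.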
@dwrensha I have a similar use case with a bytes (Data) list field in a struct that is nested a couple of levels deep inside other structs. Each entry of the list can be a couple of MB, and there are potentially thousands of records. Is there a way to read as a stream instead of decoding the full message?