Hi! I am interested in using this library for converting from arbitrary/free-form JSON source data to CBOR, and am wondering if there is a known approach for doing so without needing to decode from JSON into interface{}, and then encode interface{} into CBOR.
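For concreteness, the round trip I'd like to avoid looks roughly like the sketch below. I'm assuming the github.com/ugorji/go/codec import path and its JsonHandle/CborHandle here (inferred from the NewDecoderBytes/NewEncoderBytes names I mention later); please correct me if a different entry point is preferred.

```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec" // assumed import path for this library
)

// jsonToCBOR converts free-form JSON to CBOR via an intermediate interface{}
// value -- the reflect-based round trip this issue is asking how to avoid.
func jsonToCBOR(jsonIn []byte) ([]byte, error) {
	var jh codec.JsonHandle
	var ch codec.CborHandle

	// Step 1: decode the JSON into an empty interface (maps, slices, strings, numbers).
	var v interface{}
	if err := codec.NewDecoderBytes(jsonIn, &jh).Decode(&v); err != nil {
		return nil, err
	}

	// Step 2: re-encode the intermediate value as CBOR.
	var out []byte
	if err := codec.NewEncoderBytes(&out, &ch).Encode(v); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	out, err := jsonToCBOR([]byte(`{"name":"x","values":[1,2,3]}`))
	fmt.Printf("%x %v\n", out, err)
}
```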
In theory, since JSON and CBOR (and the other formats this library supports) are all mostly data-model equivalent, is there a path to connecting a decoder to an encoder and "stream converting" (iteratively encoding) as tokens are read from the decoder?
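To illustrate the shape of what I mean, here's a rough sketch of such a token pump. The TokenReader/TokenWriter interfaces are purely hypothetical, not something I'm claiming exists in the library today; the point is only that a loop of this form would hold one token (plus container nesting state) at a time rather than the whole value:

```go
package sketch

import "io"

// Hypothetical interfaces, purely for illustration -- not part of the library's API.
// A token would be a scalar value or a begin/end marker for a map or array.
type TokenReader interface {
	ReadToken() (tok interface{}, err error)
}

type TokenWriter interface {
	WriteToken(tok interface{}) error
}

// streamConvert pumps tokens from one format's decoder into another format's
// encoder, so the full decoded value is never materialized in memory.
func streamConvert(r TokenReader, w TokenWriter) error {
	for {
		tok, err := r.ReadToken()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if err := w.WriteToken(tok); err != nil {
			return err
		}
	}
}
```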
Other use cases could include connecting a JSON decoder to a JSON encoder in order to minify or indent the same semantic content with minimal memory use and without whole-of-content buffering. Equivalently, canonicalizing or compacting a binary format, potentially even with compaction in place (i.e. NewEncoderBytes pointing at the same byte slice passed to NewDecoderBytes).
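For comparison, the standard library already covers the minify/indent case, but only over a fully buffered document; json.Compact and json.Indent both take the complete input as a []byte, which is exactly the whole-of-content buffering a token-streaming pipeline would avoid:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	src := []byte(`{ "a": 1,   "b": [ true, null ] }`)

	// Both calls require the complete document in memory before producing output.
	var compact, indented bytes.Buffer
	if err := json.Compact(&compact, src); err != nil {
		panic(err)
	}
	if err := json.Indent(&indented, src, "", "  "); err != nil {
		panic(err)
	}

	fmt.Println(compact.String())
	fmt.Println(indented.String())
}
```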
Being able to provide a function to observe or intercept tokens, as can be done with the stdlib's json.Decoder.Token, could also enable some interesting cases, such as using JSON Path expressions to extract a subset of the data.
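For reference, observing tokens with the stdlib looks roughly like this; an observer/interceptor hook of a similar shape in this library could be handed each token before it reaches the encoder:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"a":[1,2],"b":"x"}`))

	// json.Token is one of: json.Delim ({ } [ ]), bool, float64, json.Number,
	// string, or nil -- an observer callback would see each of these in turn.
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%T\t%v\n", tok, tok)
	}
}
```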
In my particular case, I'm intending to have a regular struct with known types: a mix of fixed-type fields and Ext/RawExt fields, where a RawExt field might carry a CBOR tag 24 (Encoded CBOR data item, a bit like json.RawMessage) with its Data slice populated from separately encoded CBOR that is ultimately sourced from entirely free-form data. I can arrange to convert the JSON data to CBOR by decoding into an empty interface, but I'm not sure how to do it in a memory-efficient way (i.e. without the redundant reflect-based decode into Go data types).
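Concretely, the shape I have in mind is roughly the sketch below, again assuming github.com/ugorji/go/codec and its RawExt type, and assuming (please correct me) that RawExt.Data is written out verbatim as the already-encoded content that follows the tag. The inner payload here is just some free-form data encoded separately, standing in for the converted JSON:

```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec" // assumed import path for this library
)

// Envelope is the known-type struct: fixed-type fields plus a field carrying
// separately encoded CBOR wrapped in tag 24 (Encoded CBOR data item).
type Envelope struct {
	ID      uint64
	Name    string
	Payload codec.RawExt // Tag: 24, Data: the already-encoded tag content
}

func main() {
	var ch codec.CborHandle

	// Separately encode some free-form data (today: via the interface{} path).
	freeForm := map[string]interface{}{"k": []interface{}{1, "two", true}}
	var inner []byte
	if err := codec.NewEncoderBytes(&inner, &ch).Encode(freeForm); err != nil {
		panic(err)
	}

	// Tag 24's content is a byte string whose bytes are a complete encoded CBOR
	// item, so wrap the inner encoding as a CBOR byte string first. This assumes
	// RawExt.Data is emitted verbatim after the tag header -- if that's not how
	// RawExt behaves, this is the part I'd need guidance on.
	var tagContent []byte
	if err := codec.NewEncoderBytes(&tagContent, &ch).Encode(inner); err != nil {
		panic(err)
	}

	env := Envelope{
		ID:      7,
		Name:    "example",
		Payload: codec.RawExt{Tag: 24, Data: tagContent},
	}

	var out []byte
	if err := codec.NewEncoderBytes(&out, &ch).Encode(env); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", out)
}
```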
I look forward to thoughts and guidance on this topic. Thanks!
By design, for simplicity of use, we tried to keep the API tight and reduce leaks in the abstraction so that we can continue to optimize the internals.