Pyroscope stores symbolic information such as locations, functions, mappings, and strings in column-major order, in parquet format. We define the schema dynamically and have hand-written construct/deconstruct procedures for each of the models (see the sketch after the list below). While this gives us a simple and convenient way to manage and maintain the storage schema, the approach has its own disadvantages:
- We always read all the model fields/columns, and read/write buffers are allocated for each column, which causes excessive I/O and resource usage.
- Fairly expensive decoding (~5-7% of query CPU time).
- Read amplification caused by the fact that a partition can overlap parquet column chunk page boundaries.
- Despite the small size of the payload, fetching partitions is often responsible for tail latencies. The impact is even more pronounced on downsampled/aggregated data.
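For context, here is a minimal sketch of roughly what the current approach looks like, assuming the parquet-go library; the `InMemoryLocation` shape and its fields are hypothetical and only stand in for the real Pyroscope models:

```go
package main

import (
	"fmt"

	"github.com/parquet-go/parquet-go"
)

// InMemoryLocation is a hypothetical stand-in for one of the symbolic models.
type InMemoryLocation struct {
	Address   uint64
	Folded    bool
	MappingID uint32
}

// The schema is defined dynamically rather than derived from struct tags.
// parquet-go sorts group fields alphabetically, so the column indexes used
// in deconstruct below follow that order: Address, Folded, MappingID.
var locationsSchema = parquet.NewSchema("locations", parquet.Group{
	"Address":   parquet.Uint(64),
	"Folded":    parquet.Leaf(parquet.BooleanType),
	"MappingID": parquet.Uint(32),
})

// deconstruct is the hand-written model-to-row conversion; construct (not
// shown) does the reverse, and always touches every column.
func deconstruct(loc InMemoryLocation) parquet.Row {
	return parquet.Row{
		parquet.Int64Value(int64(loc.Address)).Level(0, 0, 0),
		parquet.BooleanValue(loc.Folded).Level(0, 0, 1),
		parquet.Int32Value(int32(loc.MappingID)).Level(0, 0, 2),
	}
}

func main() {
	row := deconstruct(InMemoryLocation{Address: 0x401000, MappingID: 1})
	fmt.Println(locationsSchema, len(row))
}
```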
In the screenshot below, you can see a `parquetTableRange.fetch` call that lasted 3 seconds with no good reason – most likely it was blocked by the async page reader, which is shared with the profile table reader:
I propose to develop a custom binary format with low-level encoders and decoders for the data models. The data should be organised in row-major order. I expect this will effectively remove symbolic data retrieval from the list of query latency factors.
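As a rough illustration of the direction (not a proposed wire format), here is a minimal row-major codec sketch using varints from the standard library; the `Location` model and its fields are hypothetical:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// Location is an illustrative stand-in for one of the symbolic models; the
// real format would cover locations, functions, mappings, and strings alike.
type Location struct {
	Address    uint64
	MappingID  uint32
	FunctionID uint32
	Line       uint64
}

// encode writes one record in row-major order: all fields of a record sit
// next to each other as varints, so a reader decodes whole rows in a single
// sequential pass without per-column buffers.
func encode(w *bufio.Writer, loc Location) error {
	var buf [binary.MaxVarintLen64]byte
	for _, v := range []uint64{
		loc.Address,
		uint64(loc.MappingID),
		uint64(loc.FunctionID),
		loc.Line,
	} {
		n := binary.PutUvarint(buf[:], v)
		if _, err := w.Write(buf[:n]); err != nil {
			return err
		}
	}
	return nil
}

// decode reads the fields back in the same order they were written.
func decode(r io.ByteReader) (loc Location, err error) {
	var vals [4]uint64
	for i := range vals {
		if vals[i], err = binary.ReadUvarint(r); err != nil {
			return loc, err
		}
	}
	loc.Address = vals[0]
	loc.MappingID = uint32(vals[1])
	loc.FunctionID = uint32(vals[2])
	loc.Line = vals[3]
	return loc, nil
}

func main() {
	var b bytes.Buffer
	w := bufio.NewWriter(&b)
	_ = encode(w, Location{Address: 0x401000, MappingID: 1, FunctionID: 42, Line: 7})
	_ = w.Flush()
	loc, _ := decode(bufio.NewReader(&b))
	fmt.Printf("%+v\n", loc)
}
```

Because each row is self-contained, a partition can be fetched with a single contiguous read and decoded in one pass, which is what should take symbolic data retrieval off the query latency path.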