store: reduce memory footprint for chunks queries #3937

Merged: 12 commits into thanos-io:main, Apr 1, 2021

Conversation

@krya-kryak (Contributor, author) commented on Mar 16, 2021:

  • I added a CHANGELOG entry for this change.
  • The change is not relevant to the end user.

Changes

  • Reduce the memory footprint of chunks queries by keeping in memory only the chunks and subchunks that were actually requested and throwing away all the unnecessary bytes; processed chunks are saved into moderately sized slabs allocated on demand, so at most one slab is wasted needlessly (see the illustrative sketch below this list).
  • A fix for one data race (originally listed as two) found during work on the main part.

Verification

A DownsampledBlockSeries benchmark was added. Note the reduction in the amount of allocated memory.

$ ~/go/bin/benchstat main_with_new_benchmark.txt slab_benchmark.txt
name                                                                       old time/op    new time/op    delta
BucketSeries/1000000SeriesWith1Samples/1of1000000-8                          90.9ms ± 2%    96.1ms ± 5%   +5.71%  (p=0.008 n=5+5)
BucketSeries/1000000SeriesWith1Samples/10of1000000-8                         90.4ms ± 2%    93.6ms ± 6%   +3.57%  (p=0.032 n=5+5)
BucketSeries/1000000SeriesWith1Samples/1000000of1000000-8                     1.10s ± 7%     1.02s ± 5%   -7.55%  (p=0.032 n=5+5)
BucketSeries/100000SeriesWith100Samples/1of10000000-8                        6.53ms ± 1%    6.50ms ± 1%     ~     (p=1.000 n=5+5)
BucketSeries/100000SeriesWith100Samples/100of10000000-8                      6.47ms ± 0%    6.48ms ± 1%     ~     (p=1.000 n=5+5)
BucketSeries/100000SeriesWith100Samples/10000000of10000000-8                 92.4ms ± 8%    91.9ms ± 8%     ~     (p=1.000 n=5+5)
BucketSeries/1SeriesWith10000000Samples/1of10000000-8                         224µs ± 1%     229µs ± 0%   +1.97%  (p=0.008 n=5+5)
BucketSeries/1SeriesWith10000000Samples/100of10000000-8                       225µs ± 2%     228µs ± 0%     ~     (p=0.151 n=5+5)
BucketSeries/1SeriesWith10000000Samples/10000000of10000000-8                 24.9ms ± 5%    19.5ms ±10%  -21.54%  (p=0.008 n=5+5)
BlockSeries/concurrency:_1-8                                                 11.2ms ± 1%    10.9ms ± 4%   -2.85%  (p=0.016 n=5+5)
BlockSeries/concurrency:_2-8                                                 6.41ms ± 6%    6.27ms ± 3%     ~     (p=0.548 n=5+5)
BlockSeries/concurrency:_4-8                                                 5.56ms ±11%    4.03ms ± 1%  -27.42%  (p=0.008 n=5+5)
BlockSeries/concurrency:_8-8                                                 4.82ms ±17%    3.43ms ± 7%  -28.73%  (p=0.008 n=5+5)
BlockSeries/concurrency:_16-8                                                6.96ms ±10%    3.54ms ±18%  -49.07%  (p=0.008 n=5+5)
BlockSeries/concurrency:_32-8                                                10.6ms ±62%     5.5ms ± 9%  -48.14%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_1-8                 2.92ms ± 5%    3.12ms ± 0%   +6.84%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_2-8                 1.72ms ±11%    1.88ms ± 5%     ~     (p=0.095 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_4-8                 1.09ms ±12%    1.30ms ± 2%  +19.12%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_8-8                  860µs ± 2%    1215µs ± 3%  +41.32%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_16-8                 863µs ± 3%    1198µs ± 1%  +38.92%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_32-8                 923µs ± 5%    1244µs ± 4%  +34.86%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_1-8             2.87ms ± 5%    3.22ms ± 3%  +12.48%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_2-8             1.49ms ± 3%    1.97ms ± 6%  +31.78%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_4-8             1.09ms ±14%    1.25ms ± 5%     ~     (p=0.056 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_8-8              983µs ±14%    1117µs ± 4%  +13.67%  (p=0.032 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_16-8             968µs ±20%    1132µs ± 1%     ~     (p=0.151 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_32-8             896µs ± 4%    1186µs ± 6%  +32.34%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_1-8         2.99ms ± 5%    3.62ms ±30%  +20.99%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_2-8         1.52ms ± 3%    2.05ms ±11%  +34.83%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_4-8         1.05ms ± 4%    1.23ms ± 6%  +16.53%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_8-8          937µs ± 4%    1098µs ± 4%  +17.12%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_16-8        1.04ms ±19%    1.13ms ± 1%     ~     (p=0.151 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_32-8        1.12ms ±19%    1.14ms ± 3%     ~     (p=1.000 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_1-8     3.31ms ±14%    3.44ms ± 2%     ~     (p=0.548 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_2-8     1.70ms ± 4%    2.00ms ± 5%  +17.26%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_4-8     1.24ms ±15%    1.23ms ± 3%     ~     (p=0.548 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_8-8     1.02ms ± 3%    1.10ms ± 4%   +7.76%  (p=0.016 n=4+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_16-8    1.01ms ± 9%    1.12ms ± 2%  +10.91%  (p=0.016 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_32-8    1.09ms ± 8%    1.15ms ± 3%     ~     (p=0.056 n=5+5)

name                                                                       old alloc/op   new alloc/op   delta
BucketSeries/1000000SeriesWith1Samples/1of1000000-8                          62.0MB ± 0%    62.1MB ± 0%   +0.09%  (p=0.008 n=5+5)
BucketSeries/1000000SeriesWith1Samples/10of1000000-8                         62.0MB ± 0%    62.1MB ± 0%   +0.06%  (p=0.008 n=5+5)
BucketSeries/1000000SeriesWith1Samples/1000000of1000000-8                    1.29GB ± 0%    1.25GB ± 0%   -2.90%  (p=0.029 n=4+4)
BucketSeries/100000SeriesWith100Samples/1of10000000-8                        4.82MB ± 0%    4.86MB ± 0%   +0.83%  (p=0.008 n=5+5)
BucketSeries/100000SeriesWith100Samples/100of10000000-8                      4.82MB ± 0%    4.86MB ± 0%   +0.81%  (p=0.008 n=5+5)
BucketSeries/100000SeriesWith100Samples/10000000of10000000-8                  130MB ± 4%     128MB ± 3%     ~     (p=0.548 n=5+5)
BucketSeries/1SeriesWith10000000Samples/1of10000000-8                         177kB ± 0%     214kB ± 0%  +21.03%  (p=0.016 n=4+5)
BucketSeries/1SeriesWith10000000Samples/100of10000000-8                       177kB ± 0%     214kB ± 0%  +21.05%  (p=0.008 n=5+5)
BucketSeries/1SeriesWith10000000Samples/10000000of10000000-8                 44.8MB ± 5%    39.8MB ± 0%  -11.26%  (p=0.008 n=5+5)
BlockSeries/concurrency:_1-8                                                 16.9MB ± 3%     5.4MB ± 6%  -68.36%  (p=0.008 n=5+5)
BlockSeries/concurrency:_2-8                                                 16.9MB ± 8%     5.4MB ± 5%  -67.83%  (p=0.008 n=5+5)
BlockSeries/concurrency:_4-8                                                 15.7MB ± 7%     5.4MB ± 6%  -65.86%  (p=0.008 n=5+5)
BlockSeries/concurrency:_8-8                                                 13.5MB ±20%     5.7MB ±12%  -57.54%  (p=0.008 n=5+5)
BlockSeries/concurrency:_16-8                                                18.2MB ±12%     5.9MB ±14%  -67.62%  (p=0.008 n=5+5)
BlockSeries/concurrency:_32-8                                                30.8MB ±26%     8.1MB ±14%  -73.69%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_1-8                 2.02MB ±16%    0.71MB ± 0%  -64.80%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_2-8                 2.09MB ±19%    0.73MB ± 1%  -65.07%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_4-8                 2.05MB ±20%    0.74MB ± 1%  -64.11%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_8-8                 1.65MB ±20%    0.74MB ± 1%  -55.07%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_16-8                1.80MB ± 5%    0.75MB ± 2%  -58.47%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_32-8                1.81MB ± 7%    0.80MB ± 3%  -55.85%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_1-8             2.11MB ±14%    0.84MB ± 0%  -60.08%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_2-8             2.16MB ± 7%    0.86MB ± 1%  -60.29%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_4-8             2.30MB ±13%    0.84MB ± 1%  -63.50%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_8-8             2.02MB ± 7%    0.84MB ± 1%  -58.38%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_16-8            1.86MB ±10%    0.86MB ± 2%  -53.55%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_32-8            1.87MB ±11%    0.88MB ± 3%  -52.91%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_1-8         2.26MB ± 9%    0.98MB ± 2%  -56.76%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_2-8         2.23MB ±10%    0.99MB ± 1%  -55.54%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_4-8         2.45MB ± 4%    0.94MB ± 2%  -61.69%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_8-8         2.01MB ±28%    0.89MB ± 2%  -55.56%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_16-8        1.86MB ±12%    0.95MB ± 1%  -49.18%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_32-8        2.12MB ±13%    0.95MB ± 3%  -55.19%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_1-8     2.13MB ±15%    1.06MB ± 3%  -50.40%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_2-8     2.43MB ± 6%    1.07MB ± 3%  -56.01%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_4-8     2.41MB ± 9%    1.01MB ± 3%  -57.93%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_8-8     2.25MB ±16%    0.99MB ± 2%  -55.88%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_16-8    2.09MB ±18%    1.04MB ± 1%  -50.34%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_32-8    2.02MB ± 8%    1.07MB ± 4%  -47.11%  (p=0.008 n=5+5)

name                                                                       old allocs/op  new allocs/op  delta
BucketSeries/1000000SeriesWith1Samples/1of1000000-8                           9.69k ± 1%     9.70k ± 0%     ~     (p=0.690 n=5+5)
BucketSeries/1000000SeriesWith1Samples/10of1000000-8                          9.81k ± 0%     9.79k ± 0%     ~     (p=0.341 n=5+5)
BucketSeries/1000000SeriesWith1Samples/1000000of1000000-8                     10.1M ± 0%     10.0M ± 0%   -0.37%  (p=0.008 n=5+5)
BucketSeries/100000SeriesWith100Samples/1of10000000-8                         1.10k ± 0%     1.10k ± 0%     ~     (p=0.762 n=5+5)
BucketSeries/100000SeriesWith100Samples/100of10000000-8                       1.13k ± 0%     1.13k ± 0%     ~     (p=0.190 n=5+5)
BucketSeries/100000SeriesWith100Samples/10000000of10000000-8                  1.01M ± 0%     1.00M ± 0%   -0.34%  (p=0.008 n=5+5)
BucketSeries/1SeriesWith10000000Samples/1of10000000-8                           199 ± 0%       200 ± 0%   +0.50%  (p=0.008 n=5+5)
BucketSeries/1SeriesWith10000000Samples/100of10000000-8                         199 ± 0%       200 ± 0%   +0.50%  (p=0.008 n=5+5)
BucketSeries/1SeriesWith10000000Samples/10000000of10000000-8                   170k ± 0%      167k ± 0%   -1.32%  (p=0.008 n=5+5)
BlockSeries/concurrency:_1-8                                                  32.6k ± 1%     31.8k ± 4%     ~     (p=0.056 n=5+5)
BlockSeries/concurrency:_2-8                                                  32.2k ± 2%     32.6k ± 3%     ~     (p=0.198 n=5+5)
BlockSeries/concurrency:_4-8                                                  33.9k ± 5%     32.4k ± 2%     ~     (p=0.175 n=5+5)
BlockSeries/concurrency:_8-8                                                  36.3k ±12%     31.8k ± 6%  -12.27%  (p=0.032 n=5+5)
BlockSeries/concurrency:_16-8                                                 54.0k ± 9%     32.9k ±16%  -39.00%  (p=0.008 n=5+5)
BlockSeries/concurrency:_32-8                                                 64.7k ±12%     56.9k ± 6%     ~     (p=0.222 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_1-8                  5.70k ± 1%     5.76k ± 0%   +1.05%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_2-8                  5.68k ± 1%     5.76k ± 1%   +1.40%  (p=0.032 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_4-8                  5.73k ± 1%     5.79k ± 1%     ~     (p=0.087 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_8-8                  5.74k ± 2%     5.93k ± 1%   +3.32%  (p=0.016 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_16-8                 5.69k ± 1%     6.07k ± 3%   +6.71%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT],_concurrency:_32-8                 6.04k ± 6%     6.72k ± 5%  +11.25%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_1-8              7.42k ± 1%     7.43k ± 0%     ~     (p=0.841 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_2-8              7.42k ± 1%     7.54k ± 1%   +1.67%  (p=0.032 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_4-8              7.48k ± 1%     7.53k ± 1%     ~     (p=0.310 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_8-8              7.44k ± 1%     7.63k ± 2%   +2.53%  (p=0.016 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_16-8             7.67k ± 5%     8.02k ± 2%     ~     (p=0.151 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM],_concurrency:_32-8             7.69k ± 7%     8.53k ± 3%  +10.95%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_1-8          9.15k ± 1%     9.20k ± 1%     ~     (p=0.310 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_2-8          9.12k ± 1%     9.16k ± 1%     ~     (p=0.548 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_4-8          9.12k ± 1%     9.24k ± 1%     ~     (p=0.135 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_8-8          9.40k ± 1%     9.25k ± 3%     ~     (p=0.151 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_16-8         9.53k ± 5%     9.95k ± 1%   +4.41%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN],_concurrency:_32-8         10.4k ± 9%     10.2k ± 2%     ~     (p=0.690 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_1-8      10.9k ± 1%     10.9k ± 0%     ~     (p=0.905 n=5+4)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_2-8      10.9k ± 1%     10.9k ± 0%     ~     (p=0.421 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_4-8      10.9k ± 1%     11.0k ± 1%     ~     (p=0.151 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_8-8      11.2k ± 4%     11.0k ± 1%     ~     (p=0.548 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_16-8     11.2k ± 4%     11.8k ± 2%   +5.07%  (p=0.008 n=5+5)
DownsampledBlockSeries/aggregates:_[COUNT_SUM_MIN_MAX],_concurrency:_32-8     12.2k ± 8%     12.3k ± 2%     ~     (p=0.690 n=5+5)

Linter failed with: `saviour` is a misspelling of `savior` (misspell)

Signed-off-by: Vladimir Kononov <krya-kryak@users.noreply.github.com>
chunks: map[uint64]chunkenc.Chunk{},
}
}

// addPreload adds the chunk with id to the data set that will be fetched on calling preload.
func (r *bucketChunkReader) addPreload(id uint64) error {
func (r *bucketChunkReader) Chunk(id uint64) (chunkenc.Chunk, error) {
krya-kryak (author):

Chunk and Close methods are unchanged; just moved exported functions closer to receiver definition.

Reviewer (Member):

Thanks!

krya-kryak (author):

Actually, scratch that. First of all, I had broken Chunk() while changing the preload flow. Secondly, Chunk() is no longer used, so I've deleted it altogether.

Signed-off-by: Vladimir Kononov <krya-kryak@users.noreply.github.com>
@@ -1318,8 +1319,6 @@ func benchBucketSeries(t testutil.TB, skipChunk bool, samplesPerSeries, totalSer

if !t.IsBenchmark() {
if !skipChunk {
// Make sure the pool is correctly used. This is expected for 200k numbers.
testutil.Equals(t, numOfBlocks, int(st.chunkPool.(*mockedPool).gets.Load()))
krya-kryak (author):

Checking the number of slabs allocated by the chunk pool is now non-trivial, as it depends on the time series.

@krya-kryak (author) commented:

2021-03-16T12:19:18.5949904Z --- FAIL: TestStoreGateway (35.07s)
2021-03-16T12:19:18.5950809Z     --- FAIL: TestStoreGateway/query_works (25.86s)
2021-03-16T12:19:18.5951573Z === CONT  TestReceive/replication_with_outage
<...>
2021-03-16T12:19:18.7179785Z FAIL
2021-03-16T12:19:18.7180515Z FAIL	github.com/thanos-io/thanos/test/e2e	150.873s
2021-03-16T12:19:18.7181531Z ?   	github.com/thanos-io/thanos/test/e2e/e2ethanos	[no test files]
2021-03-16T12:19:18.7182110Z FAIL
2021-03-16T12:19:18.7183653Z ##[error]make: *** [Makefile:243: test-e2e] Error 1
2021-03-16T12:19:18.7192266Z ##[error]Process completed with exit code 2.

The failing e2e test TestStoreGateway seems to pass locally (output cleaned up):

GOROOT=/usr/local/Cellar/go/1.16.2/libexec #gosetup
GOPATH=/Users/vkononov/go #gosetup
/usr/local/Cellar/go/1.16.2/libexec/bin/go test -c -o /private/var/folders/31/qr0fcfm56vs91g1fl524n63m0000gp/T/___TestStoreGateway_in_github_com_thanos_io_thanos_test_e2e github.com/thanos-io/thanos/test/e2e #gosetup
/usr/local/Cellar/go/1.16.2/libexec/bin/go tool test2json -t /private/var/folders/31/qr0fcfm56vs91g1fl524n63m0000gp/T/___TestStoreGateway_in_github_com_thanos_io_thanos_test_e2e -test.v -test.run ^\QTestStoreGateway\E$
=== RUN   TestStoreGateway
=== PAUSE TestStoreGateway
=== CONT  TestStoreGateway
--- PASS: TestStoreGateway (21.98s)
=== RUN   TestStoreGateway/query_works
    --- PASS: TestStoreGateway/query_works (5.05s)
=== RUN   TestStoreGateway/remove_meta.json_from_id1_block
    --- PASS: TestStoreGateway/remove_meta.json_from_id1_block (1.19s)
=== RUN   TestStoreGateway/upload_block_id5,_similar_to_id1
    --- PASS: TestStoreGateway/upload_block_id5,_similar_to_id1 (2.82s)
=== RUN   TestStoreGateway/delete_whole_id2_block_#yolo
    --- PASS: TestStoreGateway/delete_whole_id2_block_#yolo (3.03s)
PASS

Process finished with exit code 0

@GiedriusS (Member) left a comment:

Thanks for your work! It seems like you have mixed up the order of the two benchmarks in your PR's description. The times have decreased, not increased, right? Also, I was able to reproduce the CI failure locally :( Will attach more info once I am able to find something.

CHANGELOG.md Outdated
@@ -19,8 +19,11 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re
### Fixed
- [#3204](https://github.com/thanos-io/thanos/pull/3204) Mixin: Use sidecar's metric timestamp for healthcheck.
- [#3922](https://github.com/thanos-io/thanos/pull/3922) Fix panic in http logging middleware.
- [#3937](https://github.com/thanos-io/thanos/pull/3937) Store: Fix race condition in chunk pool.
- [#3937](https://github.com/thanos-io/thanos/pull/3937) Testutil: Fix race condition encountered during benchmarking.
Reviewer (Member):

We typically don't add non user-facing changes here :P

krya-kryak (author):

> It seems like you have mixed up the order of two benchmarks in your PR's description. The times have decreased, not increased, right? Also, I was able to reproduce the CI failure locally :( Will attach more info once I am able to find something

time/op did in fact increase in some benchmarks (and went down a bit in others); it's mostly allocated bytes/op I was aiming at here. I'll look into the possibility of shaving off some CPU time (larger slabs, probably).

pkg/testutil/testutil.go (outdated review thread, resolved)
c, ok := r.chunks[id]
if !ok {
return nil, errors.Errorf("chunk with ID %d not found", id)
func (r *bucketChunkReader) savior(b []byte) ([]byte, error) {
Reviewer (Member):

I find the terminology here interesting. Maybe a simple name like allocate() would suit it more? I'm not sure who/what is being saved from what 😄

Also, we could potentially save even more allocations, because this only checks the last element of r.chunkBytes. So it should perform worse with allocations of sizes:

42 12345 1 2 3 4 5

i.e. where the sizes are decreasing, because we will always allocate more, whereas we could potentially store them at the beginning, unless I am mistaken. Maybe it would be worth making this change and putting together a small benchmark to see if it optimizes things even more?

krya-kryak (author):

Yes, there is a way to optimize this further, as currently slab "tails" are wasted. This would potentially save up to max(raw values chunk / subchunk size) - 1 bytes per slab (the slab size is set to 64k in this PR). What is that, 3200 bytes? If so, the maximum gain is 4.9% per slab.

However, for the benchmarks we have (filled with random values), the average size tends to be less than half of that, which would cut our estimated maximum profit to, say, 1500 bytes per slab (2.3%). Given that not every case would be that bad, I would estimate the actual memory waste to be around 1.5%.

Moreover, we would need to be lucky and only write chunks that are small enough to fit into the wasted space, so we would not be able to reclaim all of it. All in all, I would consider it a win if we reduced that waste from 1.5% to 1%. So, a 5 MB saving on a 1 GB allocation. Do we have bigger fish to fry?

Reviewer (Member):

Makes sense; also, one thing at a time (:

pkg/store/bucket.go (review thread, resolved)
pkg/pool/pool.go (review thread, resolved)
Signed-off-by: Vladimir Kononov <krya-kryak@users.noreply.github.com>
This reverts commit 194d234.

Signed-off-by: Vladimir Kononov <krya-kryak@users.noreply.github.com>
@bwplotka (Member) left a comment:

Thanks a lot! Solid work, some nits only. Otherwise looks generally good 💪🏽

pkg/store/bucket.go (3 outdated review threads, resolved)

return nil
}

// appPreload adds the chunk with id to the data set that will be fetched on calling preload.
Reviewer (Member):

Suggested change:
- // appPreload adds the chunk with id to the data set that will be fetched on calling preload.
+ // addPreload adds the chunk with id to the data set that will be fetched on calling preload.

Reviewer (Member):

Can you update this with the i, j meanings too?

krya-kryak (author):

Done. Also, renamed addPreload() / preload(), as nothing is preloaded anymore.

pkg/store/bucket.go (review thread, resolved)
pkg/store/bucket.go (outdated review thread, resolved)

pkg/store/bucket_test.go (2 outdated review threads, resolved)
Signed-off-by: Vladimir Kononov <krya-kryak@users.noreply.github.com>
@krya-kryak changed the title from "store: reduce memory footprint for chunks queries" to "WIP: store: reduce memory footprint for chunks queries" on Mar 23, 2021.
@GiedriusS (Member) left a comment:

LGTM, all comments have been addressed; is this still a WIP, @krya-kryak? I would like to merge this nice optimization. Plus, I couldn't reproduce the e2e test failures anymore, so they are either flaky or the GH Actions runners had some issues.

@krya-kryak changed the title from "WIP: store: reduce memory footprint for chunks queries" back to "store: reduce memory footprint for chunks queries" on Mar 27, 2021.
@krya-kryak (author) commented on Mar 27, 2021:

> LGTM, all comments have been addressed, is this still a WIP @krya-kryak? I would like to merge this nice optimization. Plus, I couldn't reproduce e2e test failures anymore so they are either flaky or GH Actions runners had some issues

@GiedriusS, thank you. I've set WIP with thoughts of optimizing CPU usage ( #3937 (comment) ). Unfortunately, I was not able to dedicate time to it this week :(
If you consider the current state of things good enough, I'll be happy to merge it and add performance optimizations (if any) later.

@GiedriusS (Member) commented on Mar 29, 2021:

> LGTM, all comments have been addressed, is this still a WIP @krya-kryak? I would like to merge this nice optimization. Plus, I couldn't reproduce e2e test failures anymore so they are either flaky or GH Actions runners had some issues
>
> @GiedriusS, thank you. I've set WIP with thoughts of optimizing CPU usage ( #3937 (comment) ). Unfortunately, I was not able to dedicate time to it this week :(
> If you consider the current state of things good enough, I'll be happy to merge it and add performance optimizations (if any) later.

Given that most, if not all, CPU usage happens on the Thanos Query side, and the increase here is minuscule in comparison with what happens there, I think merging this as-is should be OK. But let's wait for the opinion of others.

Signed-off-by: Vladimir Kononov <krya-kryak@users.noreply.github.com>
@bwplotka (Member) left a comment:

Looks amazing, thanks! LGTM 💪🏽

@bwplotka merged commit d3e60d6 into thanos-io:main on Apr 1, 2021.