
Merge release-0.17 back #3483

Merged 6 commits on Nov 20, 2020
18 changes: 17 additions & 1 deletion CHANGELOG.md
@@ -13,6 +13,20 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re

### Added

- [#3469](https://github.com/thanos-io/thanos/pull/3469) StoreAPI: Added `hints` field to `LabelNamesRequest` and `LabelValuesRequest`. Hints are an opaque data structure that can be used to carry additional information from the store, and their content is implementation-specific.

### Fixed

-

### Changed

-

## [v0.17.0](https://github.com/thanos-io/thanos/releases/tag/v0.17.0) - 2020.11.18

### Added

- [#3259](https://github.com/thanos-io/thanos/pull/3259) Thanos BlockViewer: Added a button in the block viewer that allows users to download the metadata of a block.
- [#3261](https://github.com/thanos-io/thanos/pull/3261) Thanos Store: Use segment files specified in the meta.json file, if present. Otherwise, Store performs the LIST operation as before.
- [#3276](https://github.com/thanos-io/thanos/pull/3276) Query Frontend: Support query splitting and retry for label names, label values and series requests.
@@ -31,7 +45,8 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re
- [#3431](https://github.com/thanos-io/thanos/pull/3431) Store: Added experimental support to lazy load index-headers at query time. When enabled via the `--store.enable-index-header-lazy-reader` flag, the store-gateway will load an index-header into memory only once it is required at query time. The index-header will be automatically released after `--store.index-header-lazy-reader-idle-timeout` of inactivity.
- This generally reduces the store's baseline memory usage when inactive, as well as the total number of mapped files (which is limited to 64k on some systems).
- [#3437](https://github.com/thanos-io/thanos/pull/3437) StoreAPI: Added `hints` field to `LabelNamesResponse` and `LabelValuesResponse`. Hints are an opaque data structure that can be used to carry additional information from the store, and their content is implementation-specific.
- [#3469](https://github.com/thanos-io/thanos/pull/3469) StoreAPI: Added `hints` field to `LabelNamesRequest` and `LabelValuesRequest`. Hints are an opaque data structure that can be used to carry additional information from the store, and their content is implementation-specific.
* This generally reduces the store's baseline memory usage when inactive, as well as the total number of mapped files (which is limited to 64k on some systems).
- [#3415](https://github.com/thanos-io/thanos/pull/3415) Tools: Added `thanos tools bucket mark` command that allows marking a given block for deletion or for no-compact.

### Fixed

@@ -43,6 +58,7 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re

### Changed

- [#3452](https://github.com/thanos-io/thanos/pull/3452) Store: Index cache posting compression is now enabled by default. Removed `experimental.enable-index-cache-postings-compression` flag.
- [#3410](https://github.com/thanos-io/thanos/pull/3410) Compactor: Changed metric `thanos_compactor_blocks_marked_for_deletion_total` to `thanos_compactor_blocks_marked_total` with `marker` label.
Compactor will now automatically disable compaction for blocks with a large index that would produce post-compaction blocks larger than the specified value (by default: 64GB). This automatically
handles the Prometheus [format limit](https://github.com/thanos-io/thanos/issues/1424).
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.16.0-dev
0.18.0-dev
7 changes: 1 addition & 6 deletions cmd/thanos/store.go
@@ -94,9 +94,6 @@ func registerStore(app *extkingpin.App) {
"On the contrary, smaller value will increase baseline memory usage, but improve latency slightly. 1 will keep all in memory. Default value is the same as in Prometheus which gives a good balance.").
Hidden().Default(fmt.Sprintf("%v", store.DefaultPostingOffsetInMemorySampling)).Int()

enablePostingsCompression := cmd.Flag("experimental.enable-index-cache-postings-compression", "If true, Store Gateway will reencode and compress postings before storing them into cache. Compressed postings take about 10% of the original size.").
Hidden().Default("false").Bool()

consistencyDelay := extkingpin.ModelDuration(cmd.Flag("consistency-delay", "Minimum age of all blocks before they are being read. Set it to safe value (e.g 30m) if your object storage is eventually consistent. GCS and S3 are (roughly) strongly consistent.").
Default("0s"))

@@ -150,7 +147,6 @@ func registerStore(app *extkingpin.App) {
},
selectorRelabelConf,
*advertiseCompatibilityLabel,
*enablePostingsCompression,
time.Duration(*consistencyDelay),
time.Duration(*ignoreDeletionMarksDelay),
*webExternalPrefix,
@@ -185,7 +181,7 @@ func runStore(
blockSyncConcurrency int,
filterConf *store.FilterConfig,
selectorRelabelConf *extflag.PathOrContent,
advertiseCompatibilityLabel, enablePostingsCompression bool,
advertiseCompatibilityLabel bool,
consistencyDelay time.Duration,
ignoreDeletionMarksDelay time.Duration,
externalPrefix, prefixHeader string,
@@ -311,7 +307,6 @@ func runStore(
blockSyncConcurrency,
filterConf,
advertiseCompatibilityLabel,
enablePostingsCompression,
postingOffsetsInMemSampling,
false,
lazyIndexReaderEnabled,
4 changes: 2 additions & 2 deletions docs/contributing/coding-style-guide.md
@@ -65,7 +65,7 @@ defer f.Close() // What if an error occurs here?
Unchecked errors like this can lead to major bugs. Consider the above example: the `*os.File` `Close` method can be responsible
for actually flushing to the file, so if an error occurs at that point, the whole **write might be aborted!** 😱

Always check errors! To make it consistent and not distracting, use our [runutil](https://pkg.go.dev/github.com/thanos-io/thanos@v0.11.0/pkg/runutil?tab=doc)
Always check errors! To make it consistent and not distracting, use our [runutil](https://pkg.go.dev/github.com/thanos-io/thanos@v0.17.0/pkg/runutil?tab=doc)
helper package, e.g.:

```go
// (snippet collapsed in the diff view)
```
@@ -122,7 +122,7 @@ func writeToFile(...) (err error) {
#### Exhaust Readers

One of the most common bugs is forgetting to close or fully read the bodies of HTTP requests and responses, especially on
error. If you read the body of such structures, you can use the [runutil](https://pkg.go.dev/github.com/thanos-io/thanos@v0.11.0/pkg/runutil?tab=doc)
error. If you read the body of such structures, you can use the [runutil](https://pkg.go.dev/github.com/thanos-io/thanos@v0.17.0/pkg/runutil?tab=doc)
helper as well:

```go
// (snippet collapsed in the diff view)
```
2 changes: 1 addition & 1 deletion docs/operating/cross-cluster-tls-communication.md
@@ -71,7 +71,7 @@ metadata:
optional: false
containers:
- name: querier
image: 'thanosio/thanos:v0.15.0'
image: 'thanosio/thanos:v0.17.0'
args:
- query
- '--log.level=info'
63 changes: 27 additions & 36 deletions pkg/store/bucket.go
@@ -247,6 +247,10 @@ type FilterConfig struct {

// BucketStore implements the store API backed by a bucket. It loads all index
// files to local disk.
//
// NOTE: Bucket store reencodes postings using diff+varint+snappy when storing to cache.
// This makes them smaller, but takes extra CPU and memory.
// When used with in-memory cache, memory usage should decrease overall, thanks to postings being smaller.
type BucketStore struct {
logger log.Logger
metrics *bucketStoreMetrics
@@ -278,10 +282,6 @@ type BucketStore struct {
advLabelSets []labelpb.ZLabelSet
enableCompatibilityLabel bool

// Reencode postings using diff+varint+snappy when storing to cache.
// This makes them smaller, but takes extra CPU and memory.
// When used with in-memory cache, memory usage should decrease overall, thanks to postings being smaller.
enablePostingsCompression bool
postingOffsetsInMemSampling int

// Enables hints in the Series() response.
@@ -304,7 +304,6 @@ func NewBucketStore(
blockSyncConcurrency int,
filterConfig *FilterConfig,
enableCompatibilityLabel bool,
enablePostingsCompression bool,
postingOffsetsInMemSampling int,
enableSeriesResponseHints bool, // TODO(pracucci) Thanos 0.12 and below doesn't gracefully handle new fields in SeriesResponse. Drop this flag and always enable hints once we can drop backward compatibility.
lazyIndexReaderEnabled bool,
@@ -336,7 +335,6 @@ func NewBucketStore(
chunksLimiterFactory: chunksLimiterFactory,
partitioner: gapBasedPartitioner{maxGapSize: partitionerMaxGapSize},
enableCompatibilityLabel: enableCompatibilityLabel,
enablePostingsCompression: enablePostingsCompression,
postingOffsetsInMemSampling: postingOffsetsInMemSampling,
enableSeriesResponseHints: enableSeriesResponseHints,
metrics: newBucketStoreMetrics(reg),
@@ -519,7 +517,6 @@ func (s *BucketStore) addBlock(ctx context.Context, meta *metadata.Meta) (err er
s.chunkPool,
indexHeaderReader,
s.partitioner,
s.enablePostingsCompression,
)
if err != nil {
return errors.Wrap(err, "new bucket block")
@@ -1379,8 +1376,6 @@ type bucketBlock struct {

partitioner partitioner

enablePostingsCompression bool

// Block's labels used by block-level matchers to filter blocks to query. These are used to select blocks using
// request hints' BlockMatchers.
relabelLabels labels.Labels
@@ -1397,19 +1392,17 @@ func newBucketBlock(
chunkPool pool.BytesPool,
indexHeadReader indexheader.Reader,
p partitioner,
enablePostingsCompression bool,
) (b *bucketBlock, err error) {
b = &bucketBlock{
logger: logger,
metrics: metrics,
bkt: bkt,
indexCache: indexCache,
chunkPool: chunkPool,
dir: dir,
partitioner: p,
meta: meta,
indexHeaderReader: indexHeadReader,
enablePostingsCompression: enablePostingsCompression,
logger: logger,
metrics: metrics,
bkt: bkt,
indexCache: indexCache,
chunkPool: chunkPool,
dir: dir,
partitioner: p,
meta: meta,
indexHeaderReader: indexHeadReader,
}

// Translate the block's labels and inject the block ID as a label
@@ -1849,22 +1842,20 @@ func (r *bucketIndexReader) fetchPostings(keys []labels.Label) ([]index.Postings
compressionTime := time.Duration(0)
compressions, compressionErrors, compressedSize := 0, 0, 0

if r.block.enablePostingsCompression {
// Reencode postings before storing to cache. If that fails, we store original bytes.
// This can only fail, if postings data was somehow corrupted,
// and there is nothing we can do about it.
// Errors from corrupted postings will be reported when postings are used.
compressions++
s := time.Now()
bep := newBigEndianPostings(pBytes[4:])
data, err := diffVarintSnappyEncode(bep, bep.length())
compressionTime = time.Since(s)
if err == nil {
dataToCache = data
compressedSize = len(data)
} else {
compressionErrors = 1
}
// Reencode postings before storing to cache. If that fails, we store original bytes.
// This can only fail, if postings data was somehow corrupted,
// and there is nothing we can do about it.
// Errors from corrupted postings will be reported when postings are used.
compressions++
s := time.Now()
bep := newBigEndianPostings(pBytes[4:])
data, err := diffVarintSnappyEncode(bep, bep.length())
compressionTime = time.Since(s)
if err == nil {
dataToCache = data
compressedSize = len(data)
} else {
compressionErrors = 1
}

r.mtx.Lock()
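The `diffVarintSnappyEncode` call in the hunk above reencodes a sorted postings list as deltas, then varints, then snappy. A stdlib-only sketch of the delta + varint stage (the snappy pass is omitted here, and the function names are illustrative, not the Thanos API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// diffVarintEncode delta-encodes a sorted list of posting IDs and writes each
// delta as an unsigned varint. Sorted IDs yield small deltas, so most entries
// fit in a single byte; the real code compresses this output further with snappy.
func diffVarintEncode(postings []uint64) []byte {
	buf := make([]byte, 0, len(postings))
	tmp := make([]byte, binary.MaxVarintLen64)
	prev := uint64(0)
	for _, p := range postings {
		n := binary.PutUvarint(tmp, p-prev)
		buf = append(buf, tmp[:n]...)
		prev = p
	}
	return buf
}

// diffVarintDecode reverses the encoding by accumulating the deltas.
func diffVarintDecode(b []byte) ([]uint64, error) {
	var out []uint64
	prev := uint64(0)
	for len(b) > 0 {
		d, n := binary.Uvarint(b)
		if n <= 0 {
			return nil, fmt.Errorf("corrupt varint at offset")
		}
		prev += d
		out = append(out, prev)
		b = b[n:]
	}
	return out, nil
}

func main() {
	in := []uint64{10, 12, 13, 100, 101}
	enc := diffVarintEncode(in)
	dec, _ := diffVarintDecode(enc)
	fmt.Println(len(enc), dec) // compact encoding, lossless round-trip
}
```

This also shows why the surrounding code tolerates encode failures by caching the original bytes: decoding errors only surface when the (possibly corrupted) postings are actually read back.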
1 change: 0 additions & 1 deletion pkg/store/bucket_e2e_test.go
@@ -168,7 +168,6 @@ func prepareStoreWithTestBlocks(t testing.TB, dir string, bkt objstore.Bucket, m
20,
filterConf,
true,
true,
DefaultPostingOffsetInMemorySampling,
true,
true,
8 changes: 1 addition & 7 deletions pkg/store/bucket_test.go
@@ -208,7 +208,7 @@ func TestBucketBlock_matchLabels(t *testing.T) {
},
}

b, err := newBucketBlock(context.Background(), log.NewNopLogger(), newBucketStoreMetrics(nil), meta, bkt, path.Join(dir, blockID.String()), nil, nil, nil, nil, true)
b, err := newBucketBlock(context.Background(), log.NewNopLogger(), newBucketStoreMetrics(nil), meta, bkt, path.Join(dir, blockID.String()), nil, nil, nil, nil)
testutil.Ok(t, err)

cases := []struct {
@@ -579,7 +579,6 @@ func TestBucketStore_Info(t *testing.T) {
20,
allowAllFilterConf,
true,
true,
DefaultPostingOffsetInMemorySampling,
false,
false,
@@ -831,7 +830,6 @@ func testSharding(t *testing.T, reuseDisk string, bkt objstore.Bucket, all ...ul
20,
allowAllFilterConf,
true,
true,
DefaultPostingOffsetInMemorySampling,
false,
false,
@@ -1647,7 +1645,6 @@ func TestSeries_ErrorUnmarshallingRequestHints(t *testing.T) {
10,
nil,
false,
true,
DefaultPostingOffsetInMemorySampling,
true,
false,
@@ -1741,7 +1738,6 @@ func TestSeries_BlockWithMultipleChunks(t *testing.T) {
10,
nil,
false,
true,
DefaultPostingOffsetInMemorySampling,
true,
false,
@@ -1886,7 +1882,6 @@ func TestBlockWithLargeChunks(t *testing.T) {
10,
nil,
false,
true,
DefaultPostingOffsetInMemorySampling,
true,
false,
@@ -2047,7 +2042,6 @@ func setupStoreForHintsTest(t *testing.T) (testutil.TB, *BucketStore, []*storepb
10,
nil,
false,
true,
DefaultPostingOffsetInMemorySampling,
true,
false,
2 changes: 1 addition & 1 deletion tutorials/katacoda/thanos/1-globalview/courseBase.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env bash

docker pull quay.io/prometheus/prometheus:v2.16.0
docker pull quay.io/thanos/thanos:v0.13.0
docker pull quay.io/thanos/thanos:v0.17.0
8 changes: 4 additions & 4 deletions tutorials/katacoda/thanos/1-globalview/step2.md
@@ -10,7 +10,7 @@ component and can be invoked in a single command.
Let's take a look at all the Thanos commands:

```
docker run --rm quay.io/thanos/thanos:v0.13.0 --help
docker run --rm quay.io/thanos/thanos:v0.17.0 --help
```{{execute}}

You should see multiple commands that solve different purposes.
@@ -53,7 +53,7 @@ docker run -d --net=host --rm \
-v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
--name prometheus-0-sidecar-eu1 \
-u root \
quay.io/thanos/thanos:v0.13.0 \
quay.io/thanos/thanos:v0.17.0 \
sidecar \
--http-address 0.0.0.0:19090 \
--grpc-address 0.0.0.0:19190 \
@@ -68,7 +68,7 @@ docker run -d --net=host --rm \
-v $(pwd)/prometheus0_us1.yml:/etc/prometheus/prometheus.yml \
--name prometheus-0-sidecar-us1 \
-u root \
quay.io/thanos/thanos:v0.13.0 \
quay.io/thanos/thanos:v0.17.0 \
sidecar \
--http-address 0.0.0.0:19091 \
--grpc-address 0.0.0.0:19191 \
@@ -81,7 +81,7 @@ docker run -d --net=host --rm \
-v $(pwd)/prometheus1_us1.yml:/etc/prometheus/prometheus.yml \
--name prometheus-1-sidecar-us1 \
-u root \
quay.io/thanos/thanos:v0.13.0 \
quay.io/thanos/thanos:v0.17.0 \
sidecar \
--http-address 0.0.0.0:19092 \
--grpc-address 0.0.0.0:19192 \
2 changes: 1 addition & 1 deletion tutorials/katacoda/thanos/1-globalview/step3.md
@@ -28,7 +28,7 @@ Click below snippet to start the Querier.
```
docker run -d --net=host --rm \
--name querier \
quay.io/thanos/thanos:v0.13.0 \
quay.io/thanos/thanos:v0.17.0 \
query \
--http-address 0.0.0.0:29090 \
--query.replica-label replica \
2 changes: 1 addition & 1 deletion tutorials/katacoda/thanos/7-multi-tenancy/courseBase.sh
@@ -1,7 +1,7 @@
#!/usr/bin/env bash

docker pull quay.io/prometheus/prometheus:v2.20.0
docker pull quay.io/thanos/thanos:v0.16.0-rc.1
docker pull quay.io/thanos/thanos:v0.17.0
docker pull quay.io/thanos/prom-label-proxy:v0.3.0-rc.0-ext1
docker pull caddy:2.2.1

10 changes: 5 additions & 5 deletions tutorials/katacoda/thanos/7-multi-tenancy/step1.md
@@ -88,7 +88,7 @@ docker run -d --net=host --rm \
-v $(pwd)/editor/prometheus0_fruit.yml:/etc/prometheus/prometheus.yml \
--name prometheus-0-sidecar-fruit \
-u root \
quay.io/thanos/thanos:v0.16.0-rc.1 \
quay.io/thanos/thanos:v0.17.0 \
sidecar \
--http-address 0.0.0.0:19090 \
--grpc-address 0.0.0.0:19190 \
@@ -120,7 +120,7 @@ docker run -d --net=host --rm \
-v $(pwd)/editor/prometheus0_veggie.yml:/etc/prometheus/prometheus.yml \
--name prometheus-0-sidecar-veggie \
-u root \
quay.io/thanos/thanos:v0.16.0-rc.1 \
quay.io/thanos/thanos:v0.17.0 \
sidecar \
--http-address 0.0.0.0:19091 \
--grpc-address 0.0.0.0:19191 \
@@ -152,7 +152,7 @@ docker run -d --net=host --rm \
-v $(pwd)/editor/prometheus1_veggie.yml:/etc/prometheus/prometheus.yml \
--name prometheus-01-sidecar-veggie \
-u root \
quay.io/thanos/thanos:v0.16.0-rc.1 \
quay.io/thanos/thanos:v0.17.0 \
sidecar \
--http-address 0.0.0.0:19092 \
--grpc-address 0.0.0.0:19192 \
@@ -170,7 +170,7 @@ Fruit:
```
docker run -d --net=host --rm \
--name querier-fruit \
quay.io/thanos/thanos:v0.16.0-rc.1 \
quay.io/thanos/thanos:v0.17.0 \
query \
--http-address 0.0.0.0:29091 \
--grpc-address 0.0.0.0:29191 \
@@ -183,7 +183,7 @@ Veggie:
```
docker run -d --net=host --rm \
--name querier-veggie \
quay.io/thanos/thanos:v0.16.0-rc.1 \
quay.io/thanos/thanos:v0.17.0 \
query \
--http-address 0.0.0.0:29092 \
--grpc-address 0.0.0.0:29192 \