s3: Respect SignatureV2 flag for all credential providers #3496

Merged: 2 commits, Nov 25, 2020
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -23,7 +23,7 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re
 
 ### Changed
 
--
+- [#3496](https://github.com/thanos-io/thanos/pull/3496) s3: Respect SignatureV2 flag for all credential providers.
 
 ## [v0.17.0](https://github.com/thanos-io/thanos/releases/tag/v0.17.0) - 2020.11.18
 
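Before this change, only the static-key branch translated the flag into V2 signing, so the env, shared-credentials-file, and IAM providers silently fell back to V4. For readers skimming the changelog entry, here is a minimal, hypothetical sketch of how the flag is consumed from Go: NewBucketWithConfig, the SignatureV2 field, and the component argument appear in the s3.go diff below, while the Bucket and Endpoint field names, the go-kit nop logger, and all sample values are assumptions for illustration only.

```go
package main

import (
	"github.com/go-kit/kit/log"

	"github.com/thanos-io/thanos/pkg/objstore/s3"
)

func main() {
	// Assumed field names (Bucket, Endpoint) and sample values; SignatureV2
	// is the flag this PR makes effective for every credential provider,
	// not only for static access keys.
	cfg := s3.Config{
		Bucket:      "example-bucket",
		Endpoint:    "s3.example.com",
		SignatureV2: true,
	}

	// NewBucketWithConfig is the constructor touched by this PR; with no
	// static keys set, credentials come from the env/file/IAM chain, which
	// now also honours SignatureV2.
	bkt, err := s3.NewBucketWithConfig(log.NewNopLogger(), cfg, "example")
	if err != nil {
		panic(err)
	}
	defer bkt.Close()
}
```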
46 changes: 23 additions & 23 deletions docs/components/tools.md
@@ -66,19 +66,19 @@ Subcommands:
     potentially a noop.
 
   tools bucket rewrite --id=ID [<flags>]
-    Rewrite chosen blocks in the bucket, while deleting or modifying
-    seriesResulted block has modified stats in meta.json. Additionally
-    compaction.sources are altered to not confuse readers of meta.json.Instead
+    Rewrite chosen blocks in the bucket, while deleting or modifying series
+    Resulted block has modified stats in meta.json. Additionally
+    compaction.sources are altered to not confuse readers of meta.json. Instead
     thanos.rewrite section is added with useful info like old sources and
-    deletion requestsNOTE: It's recommended to turn off compactor while doing
+    deletion requests. NOTE: It's recommended to turn off compactor while doing
     this operation. If the compactor is running and touching exactly same block
-    thatis being rewritten, the resulted rewritten block might only cause
-    overlap (mitigated by marking overlapping block manually for deletion)and
+    that is being rewritten, the resulted rewritten block might only cause
+    overlap (mitigated by marking overlapping block manually for deletion) and
     the data you wanted to rewrite could already part of bigger block.
 
     Use FILESYSTEM type of bucket to rewrite block on disk (suitable for vanilla
-    Prometheus)After rewrite, it's caller responsibility to delete or mark
-    source block for deletion to avoid overlaps.WARNING: This procedure is
+    Prometheus) After rewrite, it's caller responsibility to delete or mark
+    source block for deletion to avoid overlaps. WARNING: This procedure is
     *IRREVERSIBLE* after certain time (delete delay), so do backup your blocks
     first.
 
@@ -172,19 +172,19 @@ Subcommands:
     potentially a noop.
 
   tools bucket rewrite --id=ID [<flags>]
-    Rewrite chosen blocks in the bucket, while deleting or modifying
-    seriesResulted block has modified stats in meta.json. Additionally
-    compaction.sources are altered to not confuse readers of meta.json.Instead
+    Rewrite chosen blocks in the bucket, while deleting or modifying series
+    Resulted block has modified stats in meta.json. Additionally
+    compaction.sources are altered to not confuse readers of meta.json. Instead
     thanos.rewrite section is added with useful info like old sources and
-    deletion requestsNOTE: It's recommended to turn off compactor while doing
+    deletion requests. NOTE: It's recommended to turn off compactor while doing
     this operation. If the compactor is running and touching exactly same block
-    thatis being rewritten, the resulted rewritten block might only cause
-    overlap (mitigated by marking overlapping block manually for deletion)and
+    that is being rewritten, the resulted rewritten block might only cause
+    overlap (mitigated by marking overlapping block manually for deletion) and
     the data you wanted to rewrite could already part of bigger block.
 
     Use FILESYSTEM type of bucket to rewrite block on disk (suitable for vanilla
-    Prometheus)After rewrite, it's caller responsibility to delete or mark
-    source block for deletion to avoid overlaps.WARNING: This procedure is
+    Prometheus) After rewrite, it's caller responsibility to delete or mark
+    source block for deletion to avoid overlaps. WARNING: This procedure is
     *IRREVERSIBLE* after certain time (delete delay), so do backup your blocks
     first.
 
@@ -682,19 +682,19 @@ ts=2020-11-09T00:40:13.703322181Z caller=level.go:63 level=info msg="changelog w
 ```$
 usage: thanos tools bucket rewrite --id=ID [<flags>]
 
-Rewrite chosen blocks in the bucket, while deleting or modifying seriesResulted
+Rewrite chosen blocks in the bucket, while deleting or modifying series Resulted
 block has modified stats in meta.json. Additionally compaction.sources are
-altered to not confuse readers of meta.json.Instead thanos.rewrite section is
-added with useful info like old sources and deletion requestsNOTE: It's
+altered to not confuse readers of meta.json. Instead thanos.rewrite section is
+added with useful info like old sources and deletion requests. NOTE: It's
 recommended to turn off compactor while doing this operation. If the compactor
-is running and touching exactly same block thatis being rewritten, the resulted
+is running and touching exactly same block that is being rewritten, the resulted
 rewritten block might only cause overlap (mitigated by marking overlapping block
-manually for deletion)and the data you wanted to rewrite could already part of
+manually for deletion) and the data you wanted to rewrite could already part of
 bigger block.
 
 Use FILESYSTEM type of bucket to rewrite block on disk (suitable for vanilla
-Prometheus)After rewrite, it's caller responsibility to delete or mark source
-block for deletion to avoid overlaps.WARNING: This procedure is *IRREVERSIBLE*
+Prometheus) After rewrite, it's caller responsibility to delete or mark source
+block for deletion to avoid overlaps. WARNING: This procedure is *IRREVERSIBLE*
 after certain time (delete delay), so do backup your blocks first.
 
 Flags:
44 changes: 31 additions & 13 deletions pkg/objstore/s3/s3.go
@@ -161,36 +161,54 @@ func NewBucket(logger log.Logger, conf []byte, component string) (*Bucket, error
 	return NewBucketWithConfig(logger, config, component)
 }
 
+type overrideSignerType struct {
+	credentials.Provider
+	signerType credentials.SignatureType
+}
+
+func (s *overrideSignerType) Retrieve() (credentials.Value, error) {
+	v, err := s.Provider.Retrieve()
+	if err != nil {
+		return v, err
+	}
+	if !v.SignerType.IsAnonymous() {
+		v.SignerType = s.signerType
+	}
+	return v, nil
+}
+
 // NewBucketWithConfig returns a new Bucket using the provided s3 config values.
 func NewBucketWithConfig(logger log.Logger, config Config, component string) (*Bucket, error) {
 	var chain []credentials.Provider
 
+	// TODO(bwplotka): Don't do flags as they won't scale, use actual params like v2, v4 instead
+	wrapCredentialsProvider := func(p credentials.Provider) credentials.Provider { return p }
+	if config.SignatureV2 {
+		wrapCredentialsProvider = func(p credentials.Provider) credentials.Provider {
+			return &overrideSignerType{Provider: p, signerType: credentials.SignatureV2}
+		}
+	}
+
 	if err := validate(config); err != nil {
 		return nil, err
 	}
 	if config.AccessKey != "" {
-		signature := credentials.SignatureV4
-		// TODO(bwplotka): Don't do flags, use actual v2, v4 params.
-		if config.SignatureV2 {
-			signature = credentials.SignatureV2
-		}
-
-		chain = []credentials.Provider{&credentials.Static{
+		chain = []credentials.Provider{wrapCredentialsProvider(&credentials.Static{
 			Value: credentials.Value{
 				AccessKeyID:     config.AccessKey,
 				SecretAccessKey: config.SecretKey,
-				SignerType:      signature,
+				SignerType:      credentials.SignatureV4,
 			},
-		}}
+		})}
 	} else {
 		chain = []credentials.Provider{
-			&credentials.EnvAWS{},
-			&credentials.FileAWSCredentials{},
-			&credentials.IAM{
+			wrapCredentialsProvider(&credentials.EnvAWS{}),
+			wrapCredentialsProvider(&credentials.FileAWSCredentials{}),
+			wrapCredentialsProvider(&credentials.IAM{
				Client: &http.Client{
 					Transport: http.DefaultTransport,
 				},
-			},
+			}),
 		}
 	}
 
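To make the behaviour of the new wrapper concrete, here is a small standalone sketch (not part of the PR) that re-declares the same overrideSignerType shown above against the minio-go v7 credentials package and checks that a static V4 provider is reported as V2 once wrapped. The import path, package layout, and sample key values are assumptions for illustration; the wrapper itself is copied from the diff.

```go
package main

import (
	"fmt"

	"github.com/minio/minio-go/v7/pkg/credentials"
)

// overrideSignerType delegates to the wrapped provider and rewrites the
// returned SignerType, unless the credentials are anonymous. This mirrors
// the type added in pkg/objstore/s3/s3.go above.
type overrideSignerType struct {
	credentials.Provider
	signerType credentials.SignatureType
}

func (s *overrideSignerType) Retrieve() (credentials.Value, error) {
	v, err := s.Provider.Retrieve()
	if err != nil {
		return v, err
	}
	if !v.SignerType.IsAnonymous() {
		v.SignerType = s.signerType
	}
	return v, nil
}

func main() {
	// A static provider that would normally sign with V4 (sample values).
	static := &credentials.Static{
		Value: credentials.Value{
			AccessKeyID:     "EXAMPLE-KEY",
			SecretAccessKey: "EXAMPLE-SECRET",
			SignerType:      credentials.SignatureV4,
		},
	}

	// Wrapped the way NewBucketWithConfig now does when config.SignatureV2 is set.
	wrapped := &overrideSignerType{Provider: static, signerType: credentials.SignatureV2}

	v, err := wrapped.Retrieve()
	if err != nil {
		panic(err)
	}
	fmt.Println(v.SignerType == credentials.SignatureV2) // prints: true
}
```

The anonymous check mirrors the guard in the PR: anonymous credentials keep their signer type, so requests that should stay unsigned are not suddenly signed with V2.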