Fix delete updates #6194
Conversation
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
+ distributor 0.3%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
- loki -0.2%
@@ -87,6 +88,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
	f.DurationVar(&cfg.DeleteRequestCancelPeriod, "boltdb.shipper.compactor.delete-request-cancel-period", 24*time.Hour, "Allow cancellation of delete request until duration after they are created. Data would be deleted only after delete requests have been older than this duration. Ideally this should be set to at least 24h.")
	f.IntVar(&cfg.MaxCompactionParallelism, "boltdb.shipper.compactor.max-compaction-parallelism", 1, "Maximum number of tables to compact in parallel. While increasing this value, please make sure compactor has enough disk space allocated to be able to store and compact as many tables.")
	f.StringVar(&cfg.DeletionMode, "boltdb.shipper.compactor.deletion-mode", "whole-stream-deletion", fmt.Sprintf("(Experimental) Deletion mode. Can be one of %v", strings.Join(deletion.AllModes(), "|")))
	f.StringVar(&cfg.Address, "boltdb.shipper.compactor.address", "", "host and port where the compactor API is listening")
Adding this config to the compactor looks out of place to me. New users already get confused by our configs, so it is better to duplicate it a little than add to the confusion. It could be just me. What do you think?
I went back and forth on this. If we leave it off the compactor config, it becomes a required property on:
- frontend
- querier
- ruler
Do you think that's clearer for the user than having the compactor know its own address?
What do you think about making it part of the storage config? It is anyway something related to storage.
Is there also a case to be made for the common config, since it's common across multiple components?
The common config says: "// Values defined under this common configuration are supersede if a more specific value is defined."
Does that imply that there's also some more specific override elsewhere? If not, it would be a fine place for this to go.
I don't think that would apply in this case. If we moved it to the common config so that was the only place it's defined, then there would be no superseding. What that comment is meant to convey is that after things from the common config (like storage or ring configs) are applied to the many places they are used, the config.yaml is read again, so if there is, say, a specific ring config for the scheduler, it will supersede whatever was put in common.
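To make the override order concrete, here is a minimal sketch of that behavior in a Loki-style YAML config. The exact section and field names below are illustrative assumptions, not copied from Loki's actual schema; only the precedence rule (specific section supersedes common) is what the comment describes.

```yaml
# Sketch only: a ring kvstore set under `common` is applied everywhere
# rings are configured, but a more specific section read afterwards
# supersedes it for that one component.
common:
  ring:
    kvstore:
      store: memberlist   # default for every ring

query_scheduler:          # hypothetical specific section
  ring:
    kvstore:
      store: consul       # supersedes the common value for the scheduler only
```

Under this reading, if compactor_address lived only in common with no specific counterpart, nothing would ever supersede it, so the caveat in the code comment would simply never trigger.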
Yeah, I thought about using the common config, but due to that comment I thought it was not the right place, same as Travis. If we can use it for this use case, it would be a better option.
}
}

func NewDeleteRequestsClient(addr string, c doer, opts ...DeleteRequestsStoreOption) (DeleteRequestsClient, error) {
Instead of taking an HTTP client as a parameter, what do you think about accepting a config?
I'd prefer to take the client, if it's not too disruptive. I really like this pattern because it makes it very easy to unit test without worrying about the network: pkg/storage/stores/shipper/compactor/deletion/delete_requests_client_test.go
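The pattern being defended here can be sketched as follows. This is not Loki's actual client code; the struct, method, endpoint path, and fakeDoer type are illustrative assumptions. Only the doer interface name and the NewDeleteRequestsClient-style constructor shape come from the diff above.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// doer is the minimal HTTP abstraction: accept an interface instead of a
// concrete *http.Client so tests can stub responses without any network.
type doer interface {
	Do(req *http.Request) (*http.Response, error)
}

// deleteRequestsClient is a hypothetical, trimmed-down client; the field
// names and endpoint path are assumptions for illustration.
type deleteRequestsClient struct {
	addr string
	c    doer
}

func (d *deleteRequestsClient) getDeleteRequests() (string, error) {
	req, err := http.NewRequest(http.MethodGet, d.addr+"/loki/api/v1/delete", nil)
	if err != nil {
		return "", err
	}
	resp, err := d.c.Do(req) // any doer works: real client or test fake
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

// fakeDoer returns a canned response, standing in for the compactor API.
type fakeDoer struct{ body string }

func (f *fakeDoer) Do(_ *http.Request) (*http.Response, error) {
	return &http.Response{
		StatusCode: http.StatusOK,
		Body:       io.NopCloser(strings.NewReader(f.body)),
	}, nil
}

func main() {
	// In production you would pass an *http.Client, which satisfies doer.
	client := &deleteRequestsClient{addr: "http://compactor:3100", c: &fakeDoer{body: `[]`}}
	out, err := client.getDeleteRequests()
	fmt.Println(out, err == nil)
}
```

Because *http.Client already has a Do method with this signature, production code passes it unchanged, while tests inject a fake with no network involved.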
Overall the changes look great to me. I just added 2 comments. Unrelated to this PR, I also think we should add an in-memory cache for CacheGenClient, same as DeleteRequestsClient.
…client.go Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
Caching gennumbers should be covered by the
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
+ distributor 0%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
- loki -0.2%
I did another pass, just added some minor suggestions.
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
+ distributor 0%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
- loki -0.3%
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
- distributor -0.3%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
- loki -0.3%
Left very minor suggestions. Let us see if we want to move compactor_address to the common config.
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
+ distributor 0%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
- loki -0.9%
./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+ ingester 0%
+ distributor 0%
+ querier 0%
+ querier/queryrange 0%
+ iter 0%
+ storage 0%
+ chunkenc 0%
+ logql 0%
- loki -0.3%
LGTM
* Fix delete updates
* Update pkg/storage/stores/shipper/compactor/deletion/delete_requests_client.go
  Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
* Add compactor address to the storage config
* review comments
* Review feedback

Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
(cherry picked from commit 9b2786b)
Co-authored-by: Travis Patterson <travis.patterson@grafana.com>
Previously the querier got deletes from the index gateway. The delete_requests table was never updated because the component assumes that filenames never change once written (which is how the index files work). Rather than changing the index gateway's assumption about indices or carving out an exception, this PR makes the querier get delete requests from the delete API itself. That way, there is only one source of truth for deletes and everything pulls from there.

This PR also changes the location of the compactor_address configuration. Rather than having it on each component that needs it, the address is now part of the compactor config, where any component can grab it.