Local file storage extension #3087

Merged
3 commits merged into open-telemetry:main from storage-extension-2 on Apr 16, 2021

Conversation

djaglowski
Member

@djaglowski djaglowski commented Apr 12, 2021

Follows #2883

Will resolve #2287

Summary

This PR includes:

  • A full implementation of a `file_storage` extension, which can read and write data to the local file system. Any component in the collector may make use of this extension.
  • Updates to `stanza/internal` to allow stanza-based receivers to use the extension for checkpoints.
  • A new testbed scenario that has the filelogreceiver using the extension.

Configuration of the extension is simple:

```yaml
  file_storage:
  file_storage/all_settings:
    directory: /var/lib/otelcol/mydir
    timeout: 2s
```

The extension is made available to components via the `host` parameter in their `Start` method:

```go
func (r *receiver) Start(ctx context.Context, host component.Host) error {
	for _, ext := range host.GetExtensions() {
		if se, ok := ext.(storage.Extension); ok {
			client, err := se.GetClient(ctx, component.KindReceiver, r.NamedEntity)
			if err != nil {
				return err
			}
			r.storageClient = client
			return nil
		}
	}
	r.storageClient = storage.NewNopClient()
	...
}
```

@codecov

codecov bot commented Apr 12, 2021

Codecov Report

Merging #3087 (2c9cbe5) into main (62c6f89) will increase coverage by 0.03%.
The diff coverage is 96.42%.


```diff
@@            Coverage Diff             @@
##             main    #3087      +/-   ##
==========================================
+ Coverage   91.76%   91.80%   +0.03%
==========================================
  Files         486      491       +5
  Lines       23514    23616     +102
==========================================
+ Hits        21578    21681     +103
  Misses       1431     1431
+ Partials      505      504       -1
```

| Flag | Coverage Δ |
|------|------------|
| integration | 63.26% <0.00%> (-0.04%) ⬇️ |
| unit | 90.81% <96.42%> (+0.04%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|----------------|------------|
| internal/stanza/storage.go | 88.23% <88.23%> (ø) |
| extension/storage/filestorage/extension.go | 93.33% <93.33%> (ø) |
| cmd/otelcontribcol/components.go | 88.37% <100.00%> (+0.13%) ⬆️ |
| extension/storage/filestorage/client.go | 100.00% <100.00%> (ø) |
| extension/storage/filestorage/default_others.go | 100.00% <100.00%> (ø) |
| extension/storage/filestorage/factory.go | 100.00% <100.00%> (ø) |
| internal/stanza/factory.go | 95.12% <100.00%> (+0.12%) ⬆️ |
| internal/stanza/receiver.go | 100.00% <100.00%> (+5.88%) ⬆️ |

... and 5 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@djaglowski djaglowski force-pushed the storage-extension-2 branch from 8cc4533 to dafd279 on April 13, 2021 21:14
@djaglowski djaglowski changed the title WIP - Local file storage extension Local file storage extension Apr 13, 2021
@djaglowski djaglowski marked this pull request as ready for review April 14, 2021 15:45
@djaglowski djaglowski requested a review from a team April 14, 2021 15:45
@tigrannajaryan
Member

Please rebase.

@tigrannajaryan tigrannajaryan left a comment
Member

I like how minimalistic this turned out to be.
Can we add simple perf tests for the storage and publish the results in the PR?

Resolved review threads (outdated):
  • extension/storage/local/filestorage/client.go (4 threads)
  • extension/storage/local/filestorage/default_others.go
  • extension/storage/local/filestorage/extension.go (3 threads)
  • extension/storage/local/filestorage/factory.go
  • internal/stanza/storage.go
@djaglowski
Member Author

djaglowski commented Apr 15, 2021

The PR includes one additional testbed scenario that utilizes the new extension. It shows nearly no impact on performance. However, this usage of the extension by the filelogreceiver is very light, so this is somewhat expected.

```
Log10kDPS/filelog                       |PASS  |     15s|    17.8|    19.3|         59|         74|    150000|        150000|
Log10kDPS/filelog_checkpoints           |PASS  |     15s|    17.5|    19.3|         60|         73|    149900|        149900|
```

I've added a benchmark test directly to the client, and it unfortunately appears to perform quite poorly:

```
BenchmarkClientOperations-8   30   38213851 ns/op   19442 B/op   120 allocs/op
```

There is a fairly obvious improvement, which I suggest we include in a separate PR. Stanza used a local map[string][]byte to cache data in memory, and synced the cache to disk on demand. The storage.Client interface does not expose the syncing functionality to the components, but I believe it would be reasonable to sync 1) periodically on a timer (possibly configurable, but not necessarily), and 2) at Shutdown.
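A minimal sketch of that idea, for illustration only: the Client interface below is a simplified stand-in for the real storage.Client, and the cachedClient type is hypothetical, not part of this PR.

```go
package storage

import (
	"sync"
	"time"
)

// Client is a simplified stand-in for the storage client interface.
type Client interface {
	Set(key string, value []byte) error
}

// cachedClient buffers writes in memory and flushes them to the backing
// client 1) periodically on a timer and 2) at Shutdown.
type cachedClient struct {
	mu      sync.Mutex
	cache   map[string][]byte
	backing Client
	done    chan struct{}
}

func newCachedClient(backing Client, interval time.Duration) *cachedClient {
	c := &cachedClient{
		cache:   make(map[string][]byte),
		backing: backing,
		done:    make(chan struct{}),
	}
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				c.flush() // periodic sync on a timer
			case <-c.done:
				return
			}
		}
	}()
	return c
}

// Set writes to the in-memory cache only; data reaches disk on the next flush.
func (c *cachedClient) Set(key string, value []byte) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[key] = value
	return nil
}

// flush syncs all cached entries to the backing client.
func (c *cachedClient) flush() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for k, v := range c.cache {
		_ = c.backing.Set(k, v) // error handling elided for brevity
	}
}

// Shutdown stops the timer and performs a final sync.
func (c *cachedClient) Shutdown() error {
	close(c.done)
	c.flush()
	return nil
}
```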

@djaglowski djaglowski force-pushed the storage-extension-2 branch from 531b722 to ef08ded on April 15, 2021 20:37
@tigrannajaryan
Member

tigrannajaryan commented Apr 15, 2021

> I've added a benchmark test directly to the client, and it unfortunately appears to perform quite poorly:
> BenchmarkClientOperations-8 30 38213851 ns/op 19442 B/op 120 allocs/op

This is surprisingly slow: a few orders of magnitude slower than I would expect, and not good enough for our other needs (on-disk buffers for exporter queues). On an SSD, even if it fsyncs every time, I would not expect 38ms per operation. Do you know which of the 3 operations (Get/Set/Delete) is slow?

@djaglowski
Member Author

I should have mentioned that the benchmark was performing all 3 operations per iteration, but obviously this is still very slow.

I've split them into separate benchmarks, and it appears that each performs similarly.

```
BenchmarkClientGet-8      	      81	  13057659 ns/op	    5923 B/op	      32 allocs/op
BenchmarkClientSet-8      	      87	  12460373 ns/op	    6796 B/op	      45 allocs/op
BenchmarkClientDelete-8   	      82	  12823977 ns/op	    5925 B/op	      32 allocs/op
```
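For reference, a per-operation benchmark of roughly this shape can be written directly against the backing store. The sketch below is illustrative, not the PR's actual benchmark code; it assumes the bbolt fork (go.etcd.io/bbolt) and one transaction per operation, and the bucket name and payload are arbitrary:

```go
package filestorage

import (
	"fmt"
	"path/filepath"
	"testing"

	bolt "go.etcd.io/bbolt"
)

func BenchmarkClientSet(b *testing.B) {
	// Open a throwaway bolt database in a temp directory.
	db, err := bolt.Open(filepath.Join(b.TempDir(), "bench.db"), 0600, nil)
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	bucket := []byte("default")
	if err := db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists(bucket)
		return err
	}); err != nil {
		b.Fatal(err)
	}

	payload := []byte("value")
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		// Each Update is one transaction, so each Set pays the full
		// per-transaction cost (including fsync by default).
		err := db.Update(func(tx *bolt.Tx) error {
			return tx.Bucket(bucket).Put([]byte(fmt.Sprintf("key-%d", n)), payload)
		})
		if err != nil {
			b.Fatal(err)
		}
	}
}
```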

I will try out the short-term local cache idea and report back.

@tigrannajaryan
Member

> I will try out the short-term local cache idea and report back.

I am not sure I like this. I'd very much prefer a proper file storage that writes to the file immediately (without fsync), so that if we crash the data is still stored. The performance we see is extremely slow, and adding an in-memory cache defeats the purpose for high-rate cases.

I think it is OK to merge this PR and ignore the perf for now, but then immediately look for a better KV storage without spending time on adding a cache. I cannot imagine a KV storage being so slow (is it possible we are doing something wrong?). Surely something else exists with more reasonable performance.

@tigrannajaryan
Member

OK, from what I see, bolt by default fsyncs every transaction. Setting `db.NoSync = true` improves performance by 3 orders of magnitude on my machine. I believe we don't need fsync.
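For illustration, disabling fsync is a one-line change after opening the database. A minimal sketch, assuming the go.etcd.io/bbolt import (the path and timeout here are placeholders):

```go
package filestorage

import (
	"time"

	bolt "go.etcd.io/bbolt"
)

func openDB(path string) (*bolt.DB, error) {
	db, err := bolt.Open(path, 0600, &bolt.Options{Timeout: 1 * time.Second})
	if err != nil {
		return nil, err
	}
	// Skip fsync() after each commit. Writes still reach the OS page
	// cache immediately, so data survives a process crash, but not a
	// kernel crash or power loss.
	db.NoSync = true
	return db, nil
}
```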

@djaglowski
Member Author

djaglowski commented Apr 16, 2021

Thanks for looking into this Tigran. I see similar gains.

```
BenchmarkClientGet-8      	   96832	     12863 ns/op	    5494 B/op	      31 allocs/op
BenchmarkClientSet-8      	   67626	     15891 ns/op	    6351 B/op	      44 allocs/op
BenchmarkClientDelete-8   	   97971	     12196 ns/op	    5493 B/op	      31 allocs/op
```

Based on the code comments on NoSync, I have some reservations about using it, but as you said, we can likely find a better library.

@djaglowski djaglowski force-pushed the storage-extension-2 branch from edad661 to ac31de1 on April 16, 2021 14:47
@tigrannajaryan
Member

> Based on the code comments on NoSync, I have some reservations about using it, but as you said, we can likely find a better library.

What are your reservations? From what I see, with NoSync=true the data is still written to the file (as expected). The only difference is that there is no longer an fsync at the end of each transaction. This means we are unprotected from hardware failures and kernel crashes, but we are still protected from our own process crashes and restarts. I believe that is a good enough guarantee for what we use the storage for. I think it is acceptable for us to lose data or have corrupted data in the event of catastrophic failures like hardware failure or kernel crashes.

We can always add a config option in the future for users to enable fsync if they feel they need it, but I think defaulting to fsync=false is a fine approach.
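If such an option were added, the configuration might look like this; the `fsync` key is hypothetical and does not exist in this PR:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/mydir
    timeout: 2s
    fsync: true  # hypothetical future option; the default would remain false
```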

@djaglowski
Member Author

djaglowski commented Apr 16, 2021

> What are your reservations?

My reservation primarily comes from the comment on the `DB.NoSync` field, particularly the last line:

```go
type DB struct {
    ...
    // Setting the NoSync flag will cause the database to skip fsync()
    // calls after each commit. This can be useful when bulk loading data
    // into a database and you can restart the bulk load in the event of
    // a system failure or database corruption. Do not set this flag for
    // normal use.
    //
    // If the package global IgnoreNoSync constant is true, this value is
    // ignored.  See the comment on that constant for more details.
    //
    // THIS IS UNSAFE. PLEASE USE WITH CAUTION.
    NoSync bool
    ...
}
```

I think perhaps the author is just being overly cautionary and that I am echoing that out of ignorance on the implementation details.

Your explanation makes sense to me, though. I think you probably have a better understanding of, and a higher level of confidence in, the nuances of file system operations than I do, so I'm happy to defer to your judgement here and have updated the setting accordingly.

I believe I have addressed all feedback, except what's captured in the following issues: #3148, #3149, #3150, #3152

@tigrannajaryan
Member

> I think perhaps the author is just being overly cautionary and that I am echoing that out of ignorance on the implementation details.

I quickly traced through the bolt code. Obviously I don't know it in detail, but I can see that the flag controls whether OS fsync is called or not (as I would expect). The warning in the comment looks scary, but I do not see anything in the code that contradicts my understanding. I suggest that we go with NoSync=true and adjust if needed.

@tigrannajaryan
Member

LGTM, please resolve the conflicts and we can merge.

Is it possible to add a test that verifies that checkpoint storage works correctly with filelog? I think we should be able to start/stop/start the receiver and see if the checkpoints are picked up. Can be a separate PR.

@djaglowski
Member Author

Thanks @tigrannajaryan, conflicts resolved.

The first test in internal/stanza/storage_test.go cycles the receiver and validates that data has persisted.

@tigrannajaryan
Member

> The first test in internal/stanza/storage_test.go cycles the receiver and validates that data has persisted.

Yes, but it does not verify that the checkpoint functionality actually works, only that a key/value can be stored and retrieved. I want to see an end-to-end test: create a log file, collect part of it, shut down the receiver, start it again, and make sure it continued collection from the right place, with nothing missed and nothing collected twice. Again, no need to add it to this PR, but I believe it is very important to have end-to-end tests like this.
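To make the shape of such a test concrete, here is a self-contained sketch that uses a plain offset file in place of the real filelog receiver and storage extension; readFromCheckpoint and the checkpoint layout are illustrative only, but the start/stop/start flow mirrors the steps described above:

```go
package filestorage

import (
	"bufio"
	"encoding/binary"
	"io"
	"os"
	"path/filepath"
	"testing"
)

// readFromCheckpoint reads all complete lines starting at the stored offset
// and persists the new offset to checkpointPath before returning.
func readFromCheckpoint(t *testing.T, logPath, checkpointPath string) []string {
	t.Helper()

	// Load the previous offset (0 if no checkpoint exists yet).
	var offset int64
	if raw, err := os.ReadFile(checkpointPath); err == nil && len(raw) == 8 {
		offset = int64(binary.LittleEndian.Uint64(raw))
	}

	f, err := os.Open(logPath)
	if err != nil {
		t.Fatal(err)
	}
	defer f.Close()
	if _, err := f.Seek(offset, io.SeekStart); err != nil {
		t.Fatal(err)
	}

	var lines []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
		offset += int64(len(scanner.Bytes())) + 1 // +1 for '\n'
	}

	// Persist the checkpoint.
	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, uint64(offset))
	if err := os.WriteFile(checkpointPath, buf, 0600); err != nil {
		t.Fatal(err)
	}
	return lines
}

func TestCheckpointAcrossRestart(t *testing.T) {
	dir := t.TempDir()
	logPath := filepath.Join(dir, "app.log")
	checkpointPath := filepath.Join(dir, "checkpoint")

	// First "run": collect the initial lines.
	if err := os.WriteFile(logPath, []byte("one\ntwo\n"), 0600); err != nil {
		t.Fatal(err)
	}
	if got := readFromCheckpoint(t, logPath, checkpointPath); len(got) != 2 {
		t.Fatalf("first run: want 2 lines, got %v", got)
	}

	// Simulate a restart: append lines while "stopped", then read again.
	f, err := os.OpenFile(logPath, os.O_APPEND|os.O_WRONLY, 0600)
	if err != nil {
		t.Fatal(err)
	}
	if _, err := f.WriteString("three\n"); err != nil {
		t.Fatal(err)
	}
	f.Close()

	// The second run must resume from the checkpoint:
	// nothing missed, nothing collected twice.
	got := readFromCheckpoint(t, logPath, checkpointPath)
	if len(got) != 1 || got[0] != "three" {
		t.Fatalf("second run: want only %q, got %v", "three", got)
	}
}
```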

@djaglowski
Member Author

@tigrannajaryan Understood. Will track with #3157

@tigrannajaryan tigrannajaryan left a comment
Member

Thanks @djaglowski for making great progress on this.

@tigrannajaryan tigrannajaryan merged commit c95f0c2 into open-telemetry:main Apr 16, 2021
@tigrannajaryan
Member

@pmm-sumo I merged this, but if you have any comments, particularly with regard to the usage in the disk buffer, please feel free to submit issues.

@djaglowski djaglowski deleted the storage-extension-2 branch April 16, 2021 23:02
@pmm-sumo
Contributor

> @pmm-sumo I merged this, but if you have any comments, particularly with regard to the usage in the disk buffer, please feel free to submit issues.

This code looks great; I have one comment so far. If I read this piece correctly, there should be a single storage extension defined at a given time. Is that accurate? I was wondering whether the components using the storage extension should have the ability to refer to a specific one if more than one is configured, but I might be missing something here.

@djaglowski
Member Author

@pmm-sumo Thanks for taking a close look at it. Your interpretation is correct. The constraint is not necessarily meant to exist long term, though; it just seemed like a reasonable starting point.
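For illustration, if components could refer to a specific instance, the configuration might look like this; the `storage` setting on the receiver is hypothetical and not part of this PR:

```yaml
extensions:
  file_storage/receiver_checkpoints:
    directory: /var/lib/otelcol/receivers
  file_storage/exporter_buffers:
    directory: /var/lib/otelcol/exporters

receivers:
  filelog:
    include: [/var/log/app.log]
    # hypothetical setting: pick a specific storage instance by name
    storage: file_storage/receiver_checkpoints
```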

pmatyjasek-sumo added a commit to pmatyjasek-sumo/opentelemetry-collector-contrib that referenced this pull request Apr 28, 2021
Successfully merging this pull request may close these issues:

  • Design and implement persistence mechanism for log receivers (#2287)