feat(agent): output buffer persistence #15221

Closed
DStrand1 wants to merge 10 commits from the fix/802 branch

Conversation

DStrand1
Member

Summary

Implements the write-through buffer persistence strategy detailed in the spec added in #14928.

Currently this PR is mostly a draft separating the output buffer into multiple implementations, plus experimentation with a WAL file library. The largest outstanding issue is serializing metrics to []byte.

  • Using the influx parser+serializer causes a cyclic import, and it also drops the metric value type field. However, this is probably the cleanest option to investigate, since it would make the WAL files easy to re-import
  • Another suggestion was to use encoding/gob, which is what the PR is currently doing. However, this has issues with unexported fields, so I need to look more into how to work around this (see the sketch below)
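
A minimal sketch of the usual workaround for gob's unexported-field limitation: go through an exported intermediate struct. This is not the PR's actual code; the metricEntry type, the metricToBytes helper, and the set of registered field types are assumptions for illustration only.

package models

import (
	"bytes"
	"encoding/gob"
	"time"

	"github.com/influxdata/telegraf"
)

// metricEntry mirrors a metric's state with exported fields only, so
// encoding/gob can serialize it.
type metricEntry struct {
	Name   string
	Tags   map[string]string
	Fields map[string]interface{}
	Time   time.Time
}

func init() {
	// gob needs the concrete types that may appear behind interface{} values.
	gob.Register(int64(0))
	gob.Register(uint64(0))
	gob.Register(float64(0))
	gob.Register(true)
	gob.Register("")
}

func metricToBytes(m telegraf.Metric) ([]byte, error) {
	entry := metricEntry{
		Name:   m.Name(),
		Tags:   m.Tags(),
		Fields: m.Fields(),
		Time:   m.Time(),
	}
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(entry); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

Note that, like the influx serializer, this still drops the metric value type unless it is carried in the intermediate struct explicitly.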

Checklist

  • No AI generated code was used in this PR

Related issues

Related to #802, #14805

@telegraf-tiger telegraf-tiger bot added the feat Improvement on an existing feature such as adding a new setting/mode to an existing plugin label Apr 24, 2024
Member

@srebhan srebhan left a comment


Thanks @DStrand1 for this draft. It looks quite nice overall and I left some suggestions in the code.

What concerns me a bit is the underlying WAL implementation. Looking through their code I discovered some fundamental flaws.

  1. The WAL file is not opened with SYNC, so any issue on disk or with Telegraf will lose more metrics than expected; in the worst case (with a lot of RAM) all metrics are lost...
  2. The filenames are saved with the pattern <user defined name>.<index>.<offset> and are string-sorted, so if the index or offset crosses a digit boundary (i.e. a digit is added, 9 -> 10) the order is messed up, as lala.10.xyz follows lala.1.xyz instead of lala.9.xyz (see the sketch after this list)! This will in turn mess up the metric order.
  3. The WAL implementation does not care about removing files, so if you have WAL file(s) and restart Telegraf multiple times, the metrics in the WAL will be written multiple times without further handling. Doing this handling outside of the WAL implementation is hard, as you cannot know which file was processed.
  4. In its current form the WAL implementation is not capable of truncating the files front-to-back, so metrics are prone to be sent multiple times if the file was not completely flushed...
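
As a standalone illustration of point 2, a plain string sort of file names in that pattern (names made up) already breaks the ordering once the index reaches two digits:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Segment files in the <name>.<index>.<offset> pattern described above.
	files := []string{"lala.9.xyz", "lala.10.xyz", "lala.1.xyz"}
	sort.Strings(files)
	// Lexicographic order puts "lala.10.xyz" right after "lala.1.xyz" and
	// ahead of "lala.9.xyz", so replay order no longer matches write order.
	fmt.Println(files) // [lala.1.xyz lala.10.xyz lala.9.xyz]
}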

There are more (smaller) issues, but looking at the issues above I think you should go for another WAL library that is more mature. I looked at https://github.com/tidwall/wal in the past and think it can do what we need. Not a must, just a suggestion. ;-)
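
For reference, a minimal sketch of that library's append/read/truncate cycle, based on its public API; the path, indices, and payload here are illustrative only:

package main

import (
	"log"

	"github.com/tidwall/wal"
)

func main() {
	// Open (or create) a write-ahead log directory.
	w, err := wal.Open("buffer.wal", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Entries are appended with monotonically increasing indices starting at 1.
	if err := w.Write(1, []byte("cpu usage_idle=99 1700000000000000000")); err != nil {
		log.Fatal(err)
	}

	// Read an entry back by index.
	data, err := w.Read(1)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read back: %s", data)

	// TruncateFront removes all entries before the given index, i.e. the
	// front-to-back truncation missing from the other library.
	if err := w.TruncateFront(1); err != nil {
		log.Fatal(err)
	}
}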

Resolved review comments on: config/config.go, models/buffer.go (3), models/buffer_disk.go (2), models/buffer_mem.go
@srebhan srebhan self-assigned this Apr 25, 2024
@powersj
Contributor

powersj commented Apr 25, 2024

@DStrand1,

Thanks for the PR, super excited to see this. To capture what we talked about in pairs: the plan is for you to address some of the initial comments and switch to the proposed WAL library.

Let's plan to talk through where you are at or any issues with the new library in Monday's pairs.

Thanks!

@DStrand1 DStrand1 requested a review from srebhan April 30, 2024 16:01
Member

@srebhan srebhan left a comment


Thanks @DStrand1 for the nice update! My only concern, despite the test coverage, is the error handling. IMO we should pass errors up to be able to log them, e.g. if we are running out of disk space. For unrecoverable errors (e.g. corrupt WAL files, non-serializable metrics, etc.) we probably should panic, but everything else should propagate up to the plugin level (at least).
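
A hypothetical sketch of the direction suggested here: let the disk buffer's Add return an error so the plugin level can log it. The current Buffer interface does not return errors, so the signature change is only illustrative; walFile, writeIndex, metricToBytes, and metricAdded are taken from the quoted code below, and fmt is assumed to be imported.

// Hypothetical variant of Add that surfaces WAL errors instead of panicking,
// so the calling output model can log or otherwise mitigate them.
func (b *DiskBuffer) Add(metrics ...telegraf.Metric) (int, error) {
	written := 0
	for _, m := range metrics {
		if err := b.walFile.Write(b.writeIndex(), b.metricToBytes(m)); err != nil {
			// E.g. permission problems or a full disk; propagate up instead of
			// failing silently or panicking.
			return written, fmt.Errorf("writing metric to WAL: %w", err)
		}
		b.metricAdded()
		written++
	}
	return written, nil
}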

Comment on lines 65 to 100
func (b *DiskBuffer) addSingle(metric telegraf.Metric) bool {
	err := b.walFile.Write(b.writeIndex(), b.metricToBytes(metric))
	metric.Accept()
	if err == nil {
		b.metricAdded()
		return true
	}
	return false
}
Member


Is it really worth the code duplication or should we use the batch-interface instead?
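
If the duplication is not worth it, addSingle could simply delegate to the batch path; a hypothetical one-liner, assuming the PR's addBatch helper takes variadic metrics and returns the number written:

func (b *DiskBuffer) addSingle(metric telegraf.Metric) bool {
	// Reuse the batch write path so there is only one WAL write routine.
	return b.addBatch(metric) == 1
}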

	return index
}

func (b *DiskBuffer) Add(metrics ...telegraf.Metric) int {
Member


I believe we should return an error here to log it or do other mitigation...

	return written
}

func (b *DiskBuffer) Batch(batchSize int) []telegraf.Metric {
Member


Return the error.

@DStrand1 DStrand1 force-pushed the fix/802 branch 2 times, most recently from f9cce86 to 19a1e36 on June 4, 2024 18:14
@DStrand1 DStrand1 marked this pull request as ready for review June 4, 2024 18:14
Member

@srebhan srebhan left a comment


Thanks for the update @DStrand1! Here are my major concerns with the disk implementation

  1. The error handling is not in good shape yet; we panic in quite a few places, allowing for no mitigation or graceful operation. While this is OK for cases where we can be sure no error occurs (like when serializing the metric), I do have some headaches with the WAL writes neither producing an error nor panicking, as there are cases where errors can occur, like permission issues, a full disk, etc.
  2. We must provide a way to close the WAL file cleanly in case the underlying library relies on this. Please extend the interface accordingly!
  3. We cannot accept metrics at the current spot, as they are not yet guaranteed to be written to disk. Please carefully check for those issues, as we really must guarantee that a metric we claim to have accepted will not be lost!
  4. We are currently prone to accepting a metric twice: once in the addBatch path and once when the output calls Accept() for the same metric. IIRC this will cause a panic as the refcount becomes negative... I think each buffer implementation requires its own way of handling metricWritten (see the sketch after this list)...
  5. There is an undefined situation: what happens if the output drops or rejects a metric? Do we remove it from disk? If so, the input will never notice that the metric was dropped, as we already accepted it... We must clarify this case (and similarly for reject)!
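
A hypothetical sketch of the direction in point 4 (referenced above): if metrics are accepted towards the inputs at the moment they are persisted in Add, then acknowledging a flushed batch should only trim the WAL rather than call Accept() again. The batchFirstIndex field and the Log field are made up for illustration; TruncateFront is from tidwall/wal, and a real implementation would also need to handle the case where the entire log has been flushed.

// Hypothetical acknowledgement path for the disk buffer: the metrics were
// already accepted when they were persisted, so a successful flush only
// removes the corresponding WAL entries instead of calling Accept() a second
// time (which would drive the refcount negative).
func (b *DiskBuffer) Accept(batch []telegraf.Metric) {
	// batchFirstIndex is assumed to hold the WAL index of the first entry
	// handed out by the last Batch() call.
	newFront := b.batchFirstIndex + uint64(len(batch))
	if err := b.walFile.TruncateFront(newFront); err != nil {
		// Surfacing this error cleanly is part of the error-handling
		// discussion above; it must not be silently ignored.
		b.Log.Errorf("truncating WAL after flush: %v", err)
	}
	b.BufferSize.Set(int64(b.length()))
}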

Resolved review comments on: models/buffer.go (2), models/buffer_disk.go (2)
Comment on lines +73 to +86
dropped := 0
for _, m := range metrics {
	if !b.addSingle(m) {
		dropped++
	}
}
b.BufferSize.Set(int64(b.length()))
return dropped
// todo implement batched writes
Member


Shouldn't you use the addBatch function here?

Resolved review comments on: models/buffer_disk.go (3)
Comment on lines 112 to 119
err := b.walFile.WriteBatch(batch)
if err != nil {
	return 0 // todo error handle, test if a partial write occur
}
return written
Member


We cannot do this! Either we panic here if we decide this is the right thing to do or we do proper error handling.

Imagine a use case where it is crucial to collect all metrics and not lose any, and your disk runs full. The user might have a Kafka instance for buffering. Now you accepted the metrics in the for-loop above, but then writing fails here due to the disk being full... There will be no error, Telegraf will show zero metrics written (just as if no metric had arrived at the output), but the metrics are removed from Kafka because we accepted them. Now imagine the fun of debugging this situation! :-)

Resolved review comment on: models/mock/metric.go
@DStrand1
Member Author

The remaining comments are related to using the WAL file Batch function; I will work on switching to that next


@DStrand1 DStrand1 marked this pull request as draft June 20, 2024 20:13
@DStrand1
Member Author

Superseded by #15564

@DStrand1 DStrand1 closed this Jun 25, 2024
@DStrand1 DStrand1 deleted the fix/802 branch October 11, 2024 19:25