
Telemetry component cleanup #4742

Merged
twz123 merged 8 commits into k0sproject:main from telemetry-cleanup on Jul 23, 2024
Conversation

twz123 (Member) commented Jul 10, 2024

Description

  • Updated all telemetry component methods to use pointer receivers so that assignments made within these methods are preserved.
  • Pass the analytics and Kubernetes clients as arguments instead of storing them in the struct.
  • Remove the internal analyticsClient abstraction, which was basically a copy of the analytics.Client interface.
  • Replace the scattered segment token checks with a single global one that prevents the component from being added in the first place.
  • Use the version constant directly in the telemetry component instead of carrying it along in a struct field.
  • Store the storage type as a struct field instead of getting it from the cluster configuration. The storage type is part of the node config and cannot be reconciled.
  • Add a mutex: the Start/Stop methods and the Reconcile method may be called concurrently by different goroutines, so ensure there are no races when starting and stopping the telemetry loop. Also replace the bespoke for loop with wait.UntilWithContext, which is tailor-made for that purpose (see the sketch after this list).
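
Roughly, the locking and loop change in the last bullet could look like the sketch below. This is a minimal illustration, not the actual k0s code: the Component type, the sendTelemetry method, and the 10-minute interval are assumed names and values; only wait.UntilWithContext (from k8s.io/apimachinery/pkg/util/wait) is the real helper referenced above.

```go
package telemetry

import (
	"context"
	"sync"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// Component is a hypothetical stand-in for the telemetry component.
type Component struct {
	mu     sync.Mutex
	cancel context.CancelFunc
	done   chan struct{}
}

// Start launches the telemetry loop. The mutex guards against concurrent
// Start/Stop/Reconcile calls from different goroutines.
func (c *Component) Start(ctx context.Context) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.cancel != nil {
		return nil // already running
	}
	loopCtx, cancel := context.WithCancel(ctx)
	c.cancel = cancel
	c.done = make(chan struct{})
	go func() {
		defer close(c.done)
		// wait.UntilWithContext runs the function periodically until the
		// context is canceled, replacing a hand-rolled for/select loop.
		wait.UntilWithContext(loopCtx, c.sendTelemetry, 10*time.Minute)
	}()
	return nil
}

// Stop cancels the loop and waits for it to finish.
func (c *Component) Stop() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.cancel == nil {
		return nil // not running
	}
	c.cancel()
	<-c.done
	c.cancel, c.done = nil, nil
	return nil
}

func (c *Component) sendTelemetry(ctx context.Context) {
	// hypothetical: collect and send one telemetry event
}
```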

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update

How Has This Been Tested?

  • Manual test
  • Auto test added

Checklist:

  • My code follows the style guidelines of this project
  • My commit messages are signed-off
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

twz123 added the chore label Jul 10, 2024
twz123 marked this pull request as ready for review July 10, 2024 10:22
twz123 requested a review from a team as a code owner July 10, 2024 10:22
twz123 requested review from kke and makhov July 10, 2024 10:22
jnummelin previously approved these changes Jul 12, 2024

This pull request has merge conflicts that need to be resolved.

twz123 added 8 commits July 12, 2024 18:34
There are quite a few assignments in these methods that are lost if the
receiver is not a pointer.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
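
For illustration, here is a minimal, self-contained example of why the receiver matters; the component type and started field are made up, not the actual telemetry struct:

```go
package main

import "fmt"

type component struct{ started bool }

// With a value receiver the method mutates a copy,
// so the assignment is lost once the method returns.
func (c component) startByValue() { c.started = true }

// With a pointer receiver the assignment is preserved on the caller's value.
func (c *component) startByPointer() { c.started = true }

func main() {
	var c component
	c.startByValue()
	fmt.Println(c.started) // false: the change happened on a copy
	c.startByPointer()
	fmt.Println(c.started) // true: the change is visible to the caller
}
```
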
Don't store them in the component directly.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
This was basically a copy of the analytics.Client interface.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
That way, the component isn't even registered if telemetry has not been
enabled at compile time.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
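
A rough sketch of that guard, assuming the segment token is injected at build time; the variable name, package path, and Enabled helper are illustrative, not the real k0s wiring:

```go
package telemetry

// segmentToken is assumed to be injected at build time, e.g. via
//   go build -ldflags "-X <module>/pkg/telemetry.segmentToken=<token>"
// The exact variable path here is illustrative only.
var segmentToken string

// Enabled reports whether telemetry was enabled at build time. A single,
// global check like this lets the caller skip registering the component
// entirely, instead of checking the token inside every component method.
func Enabled() bool { return segmentToken != "" }
```

A caller would then do something like `if telemetry.Enabled() { /* register the component */ }` at startup.
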
Handle its lifecycle directly in the telemetry goroutine.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
The extra field is not providing any extra value at the moment.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
The storage type is part of the node config and cannot be reconciled.
Don't take it from the cluster config, but let it be set directly on
the component via a public field. This allows for removing the cluster
config from the struct.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
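
Sketched shape of this change, with hypothetical names; the real struct and config paths in k0s differ:

```go
package telemetry

// Component is a hypothetical stand-in for the telemetry component.
type Component struct {
	// StorageType is the node's storage backend (e.g. "etcd" or "kine").
	// It is node-level configuration, so the caller sets it once at
	// construction time instead of the component reading it from the
	// reconcilable cluster config.
	StorageType string
}
```

Construction would then look roughly like `&telemetry.Component{StorageType: nodeConfig.Spec.Storage.Type}`, where the exact config path is illustrative.
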
The Start/Stop methods and the Reconcile method may be called
concurrently by different goroutines. Ensure that there are no races when
starting and stopping the telemetry loop. Also replace the bespoke for
loop with wait.UntilWithContext, which is tailor-made for that purpose.

Signed-off-by: Tom Wieczorek <twieczorek@mirantis.com>
twz123 merged commit dcb64c5 into k0sproject:main Jul 23, 2024
80 checks passed
twz123 deleted the telemetry-cleanup branch July 23, 2024 12:02