
Fix massive memory leak by breaking retain cycle #4

Merged
merged 1 commit into swiftkube:main from df-retain-cycle on Jan 2, 2021

Conversation

t089 (Contributor) commented on Dec 31, 2020

Any instance of the GenericKubernetesClient is never deallocated because there is a retain cycle between the client and its jsonDecoder through the date decoding closure, which references self. This causes a massive memory leak, since everything the client creates will never be freed from memory (most notably the underlying HTTPClient).
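Roughly, the problematic shape looks like this (a simplified sketch; the property names and date format are illustrative, not the actual SwiftkubeClient code):

```swift
import Foundation

final class GenericKubernetesClient {
    // Formatter stored on the client (name is illustrative).
    private let dateFormatter = ISO8601DateFormatter()

    // The client retains the decoder...
    lazy var jsonDecoder: JSONDecoder = {
        let decoder = JSONDecoder()
        // ...and this closure captures `self` strongly to reach the
        // formatter, so the decoder retains the client: a retain cycle.
        decoder.dateDecodingStrategy = .custom { decoder in
            let string = try decoder.singleValueContainer().decode(String.self)
            guard let date = self.dateFormatter.date(from: string) else {
                throw DecodingError.dataCorrupted(.init(
                    codingPath: decoder.codingPath,
                    debugDescription: "Invalid date: \(string)"))
            }
            return date
        }
        return decoder
    }()
}
```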

My first idea was to use [unowned self] to break the retain cycle. But this is only safe if the decoder never outlives the client instance, and since the decoder is a package-internal property, we cannot guarantee that. We could make it private, but then we might as well hide the whole date-formatting internals. So this PR simply makes the date formatters owned only by the decoder, which breaks the retain cycle (see the sketch below).
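The shape of the fix, again simplified (actual names may differ): the formatter lives only inside the decoder setup, so nothing the decoder holds points back at the client.

```swift
import Foundation

final class GenericKubernetesClient {
    lazy var jsonDecoder: JSONDecoder = {
        // The formatter is owned by the decoding closure, not by `self`,
        // so the client -> decoder -> client cycle is broken.
        let dateFormatter = ISO8601DateFormatter()
        let decoder = JSONDecoder()
        decoder.dateDecodingStrategy = .custom { decoder in
            let string = try decoder.singleValueContainer().decode(String.self)
            guard let date = dateFormatter.date(from: string) else {
                throw DecodingError.dataCorrupted(.init(
                    codingPath: decoder.codingPath,
                    debugDescription: "Invalid date: \(string)"))
            }
            return date
        }
        return decoder
    }()
}
```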

Happy New Year ;)

do not store date formatters on client
iabudiab (Member) commented on Jan 2, 2021

@t089 Hey 👋 thanks for the PR and Happy New Year to you too 🎊 🎉

As you said, the internal jsonDecoder could be made private; however, I'll merge this for now and let's see in which direction the internals develop 👍

@iabudiab iabudiab merged commit 2f3fa9e into swiftkube:main Jan 2, 2021
@iabudiab iabudiab added the bug and enhancement labels on Jan 2, 2021
@t089 t089 deleted the df-retain-cycle branch January 2, 2021 18:05