Forward Hooks Persist After Destroying FeatureExtractor #72

Closed
ShunsukeOnoo opened this issue Aug 25, 2023 · 0 comments

@ShunsukeOnoo (Contributor)

Issue

When the FeatureExtractor is initialized in bdpy.recon.torch.icnn.reconstruct, it registers forward hooks on the encoder. However, these hooks are not removed after reconstruct returns and the feature_extractor is destroyed. As a result, the leftover hooks keep accumulating features and occupying memory whenever the same encoder instance is used for reconstruct multiple times.

Here's a snippet that illustrates the behavior of FeatureExtractor:

import open_clip
from bdpy.dl.torch import FeatureExtractor
from bdpy.dl.torch import models

model, _, preprocess = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')
layers = ['visual.transformer.resblocks.0']

def reconstruct_mock(model):
    # feature_extractor registers forward hooks on `model` and is destroyed
    # when it goes out of scope at the end of this function.
    feature_extractor = FeatureExtractor(model, layers, device='cpu', detach=False)

reconstruct_mock(model)

# The hooks registered inside reconstruct_mock are still attached to the model:
layer_object = models._parse_layer_name(model, layers[0])
print(layer_object._forward_hooks)
# OrderedDict([(0, <bdpy.dl.torch.torch.FeatureExtractorHandle at 0x7fc2f698bf40>)])

Admittedly, the example in the cookbook avoids this problem by re-initializing the encoder for each image, but that workaround is impractical for larger models.
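
Until this is fixed in the library, a caller can clear the stale hooks manually between reconstruct calls. A minimal sketch, reusing the same private helpers as the snippet above:

for layer in layers:
    # Drop every forward hook still attached to the hooked layer.
    layer_object = models._parse_layer_name(model, layer)
    layer_object._forward_hooks.clear()

Note that _forward_hooks is a plain OrderedDict, so clear() removes all hooks on the layer, including any registered by other code.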

Suggestion

One possible fix is to add a destructor method that clears forward hooks from the layers:

from collections import OrderedDict

class FeatureExtractor(object):

    ...

    def __del__(self):
        # Clear the forward hooks from every layer this extractor hooked into.
        for layer in self.__layers:
            layer_object = models._parse_layer_name(self._encoder, layer)
            layer_object._forward_hooks = OrderedDict()
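
A caveat: replacing _forward_hooks wholesale also wipes hooks that other code may have registered on the same layers. A less intrusive variant is to keep the RemovableHandle objects that register_forward_hook returns and remove only those in the destructor. Below is a minimal self-contained sketch of that bookkeeping (HookedExtractor and its method names are illustrative, not bdpy's actual implementation):

import torch.nn as nn

class HookedExtractor:
    """Minimal sketch of hook bookkeeping (not bdpy's actual class)."""

    def __init__(self, encoder: nn.Module, layer_names):
        self.features = {}
        self._hook_handles = []
        for name in layer_names:
            layer = encoder.get_submodule(name)
            # register_forward_hook returns a torch.utils.hooks.RemovableHandle.
            handle = layer.register_forward_hook(self._make_hook(name))
            self._hook_handles.append(handle)

    def _make_hook(self, name):
        def hook(module, inputs, output):
            self.features[name] = output
        return hook

    def __del__(self):
        # Remove only the hooks this instance registered; hooks added by
        # other code on the same layers are left untouched.
        for handle in self._hook_handles:
            handle.remove()

With this approach the destructor no longer needs models._parse_layer_name or direct access to _forward_hooks, and an explicit close() or context-manager interface could run the same cleanup deterministically instead of relying on garbage collection.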
ShuntaroAoki added a commit that referenced this issue on Jul 29, 2024:
Fix issue #72 Forward Hooks Persist After Destroying FeatureExtractor