
[Feature] Dedicated MMSegWandbHook for MMSegmentation (Weights and Biases Integration) #1603

Merged: 9 commits, Jul 1, 2022

Conversation


@ayulockin ayulockin commented May 20, 2022

Motivation

The goal of this PR is to contribute a dedicated Weights and Biases hook for MMSegmentation called MMSegWandbHook.

Modification

The PR adds one new file, wandblogger_hook.py, where all the Weights and Biases related logic lives, and modifies eval_hook.py so the validation results can be reused.

The feature can easily be used like this:

log_config = dict(
    interval=10,
    hooks=[
        dict(
            type='MMSegWandbHook',
            init_kwargs={
                'entity': WANDB_ENTITY,
                'project': WANDB_PROJECT_NAME
            },
            log_checkpoint=True,
            log_checkpoint_metadata=True,
            num_eval_images=100)
    ])
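Note: using the hook assumes the wandb package is installed and that you are authenticated (for example, via wandb login). WANDB_ENTITY and WANDB_PROJECT_NAME above are placeholders for your own W&B entity and project names.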

Use cases (Optional)

Here are some of the use cases this PR introduces, which should be helpful to the community in general.

Metrics

  • The MMSegWandbHook will automatically log training and validation metrics.
  • It will log system (CPU/GPU) metrics.
[Screen recording: automatic metrics logging]

Checkpointing with Metadata

  • If log_checkpoint is True, the checkpoint saved at every checkpoint interval will be logged as a W&B Artifact.
  • On top of this, if log_checkpoint_metadata is True, every checkpoint artifact will have evaluation metadata associated with it, as shown in the recording below (a code sketch follows it).
[Screen recording: checkpoint artifacts with metadata]
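For context, here is a minimal, self-contained sketch of what logging a checkpoint as a W&B Artifact with metadata looks like with the public wandb API. This is not the PR's exact code; the checkpoint path, metric values, and entity/project names are hypothetical.

import wandb

# Hypothetical entity/project; replace with your own.
run = wandb.init(entity='my-entity', project='my-project')

# Attach (hypothetical) evaluation metadata to the checkpoint artifact.
artifact = wandb.Artifact(
    name=f'run_{run.id}_model',
    type='model',
    metadata={'iter': 16000, 'mIoU': 0.78})
artifact.add_file('work_dirs/my_exp/iter_16000.pth')  # hypothetical path
run.log_artifact(artifact, aliases=['latest', 'iter_16000'])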

Log Model Prediction 🎉

If num_eval_images > 0, at every evaluation interval the MMSegWandbHook logs model predictions as interactive W&B Tables. To learn more about W&B Tables, please refer to the W&B documentation. A minimal sketch follows the recording below.

[Screen recording: model predictions logged as W&B Tables]
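For readers unfamiliar with W&B Tables, here is a minimal, self-contained sketch of logging one segmentation prediction as a table row with a mask overlay. The class labels and the synthetic image/mask are stand-ins for real evaluation samples, not the PR's actual implementation.

import numpy as np
import wandb

run = wandb.init(project='my-project')  # hypothetical project

class_labels = {0: 'background', 1: 'foreground'}  # hypothetical classes
table = wandb.Table(columns=['image_name', 'image'])

# One synthetic sample standing in for a real (image, prediction) pair.
img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
pred_mask = np.random.randint(0, 2, (64, 64), dtype=np.uint8)

table.add_data(
    'sample_0',
    wandb.Image(img, masks={
        'prediction': {'mask_data': pred_mask, 'class_labels': class_labels}
    }))
run.log({'eval/predictions': table})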

@CLAassistant

CLAassistant commented May 20, 2022

CLA assistant check
All committers have signed the CLA.

@MengzhangLI
Contributor

Hi @ayulockin, thank you so much for your warm-hearted and detailed PR; we will review it ASAP!

Best,

@MengzhangLI
Contributor

By the way, could you please fix the unit test error? Thanks in advance!

@ayulockin
Author

Thanks for the quick response. Yes, I will fix the unit test.

@MeowZheng MeowZheng requested a review from xiexinch June 6, 2022 02:51
xiexinch and others added 2 commits June 7, 2022 15:17
@xiexinch
Collaborator

xiexinch commented Jun 7, 2022

Hi @ayulockin
Thanks for your contribution. I'm trying to fix the circular import error that occurs when running the unit tests.
It's caused by importing mmseg.apis functions at line 11 in wandblogger_hook.py.
If you have any ideas, we would be glad to see your solution.

@xiexinch
Collaborator

xiexinch commented Jun 7, 2022

Ref open-mmlab/mmdetection#7459

@ayulockin
Author

Hey @xiexinch, thanks for letting me know. Yeah, this line is something I would not have wanted in the first place.

One of the features introduced by this PR is logging model predictions as W&B Tables. In order to do so, I am doing something like this:

        # Save prediction table
        if self.log_evaluation and self.eval_hook._should_evaluate(runner):
            results = self.test_fn(
                runner.model, self.eval_hook.dataloader, show=False)
            # Initialize evaluation table
            self._init_pred_table()
            # Log predictions
            self._log_predictions(results, runner)
            # Log the table
            self._log_eval_table(runner.iter + 1)

Line 202 (the self.test_fn call above) is where I get the prediction results by running the evaluation again (the evaluation is also done by the EvalHook). Ideally, I would have reused the predictions made in the EvalHook itself, but that will require some modifications to the mmseg/core/evaluation/eval_hooks.py file.

I can make a commit to show the proposed solution, but it might require changes in other files (for compatibility). On the flip side, it will hopefully fix the circular import issue as well.
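To illustrate the direction, here is a minimal sketch (not the actual commit) of an EvalHook that caches its latest results so a logger hook can reuse them instead of running inference a second time; the latest_results attribute name is an assumption.

from mmcv.runner import EvalHook as BaseEvalHook


class EvalHook(BaseEvalHook):
    """EvalHook that keeps its latest results for downstream hooks."""

    def _do_evaluate(self, runner):
        if not self._should_evaluate(runner):
            return
        # Deferred import: resolving mmseg.apis at call time rather than at
        # module load time also sidesteps the circular import.
        from mmseg.apis import single_gpu_test
        results = single_gpu_test(runner.model, self.dataloader, show=False)
        # Cache results so that e.g. MMSegWandbHook can reuse them.
        self.latest_results = results
        key_score = self.evaluate(runner, results)
        if self.save_best and key_score is not None:
            self._save_ckpt(runner, key_score)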

@ayulockin
Author

Based on this SO discussion: https://stackoverflow.com/questions/744373/circular-or-cyclic-imports-in-python

I made a minor modification. Not sure if it will fix the issue, but it's worth a try. cc: @xiexinch

@xiexinch
Collaborator

xiexinch commented Jun 9, 2022

Based on this SO discussion: https://stackoverflow.com/questions/744373/circular-or-cyclic-imports-in-python

I made a minor modification. Not sure if it will fix the issue, but it's worth a try. cc: @xiexinch

Sorry for the late reply. This method doesn't work.
For quick verification, you might just run the unit test in test_config.py:

pytest tests/test_config.py

@ayulockin
Author

For quick verification, you might just run the unit test in test_config.py:
pytest tests/test_config.py

Thanks for this. Will try the more elaborate solution.

@codecov

codecov bot commented Jun 16, 2022

Codecov Report

Merging #1603 (64b1579) into master (46326f6) will decrease coverage by 1.21%.
The diff coverage is 21.33%.

@@            Coverage Diff             @@
##           master    #1603      +/-   ##
==========================================
- Coverage   90.25%   89.04%   -1.22%     
==========================================
  Files         142      144       +2     
  Lines        8477     8636     +159     
  Branches     1428     1458      +30     
==========================================
+ Hits         7651     7690      +39     
- Misses        586      706     +120     
  Partials      240      240              
Flag        Coverage Δ
unittests   89.04% <21.33%> (-1.22%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                               Coverage Δ
mmseg/core/hook/wandblogger_hook.py          17.48% <17.48%> (ø)
mmseg/core/__init__.py                       100.00% <100.00%> (ø)
mmseg/core/evaluation/eval_hooks.py          74.54% <100.00%> (+1.99%) ⬆️
mmseg/core/hook/__init__.py                  100.00% <100.00%> (ø)
mmseg/apis/inference.py                      61.66% <0.00%> (-2.13%) ⬇️
mmseg/models/backbones/vit.py                90.85% <0.00%> (-0.06%) ⬇️
mmseg/models/decode_heads/segformer_head.py  100.00% <0.00%> (ø)
mmseg/datasets/pipelines/test_time_aug.py    96.55% <0.00%> (+0.25%) ⬆️
mmseg/models/segmentors/encoder_decoder.py   88.66% <0.00%> (+0.31%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 46326f6...64b1579.

@xiexinch
Collaborator

A minor modification, with reference to eval_hooks.py:

from mmseg.apis import single_gpu_test
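In other words, the module-level import in wandblogger_hook.py can be moved into the function that needs it, mirroring how eval_hooks.py defers the import. A minimal illustration of the pattern (the function name is hypothetical):

def run_validation(model, dataloader):
    # A module-level `from mmseg.apis import single_gpu_test` would trigger
    # the cycle mmseg.core -> mmseg.apis -> mmseg.core. Importing inside the
    # function defers resolution until call time, after both modules load.
    from mmseg.apis import single_gpu_test
    return single_gpu_test(model, dataloader, show=False)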
